Mental models of audit and feedback in primary care settings

Abstract

Background: Audit and feedback has been shown to be instrumental in improving quality of care, particularly in outpatient settings. The mental model individuals and organizations hold regarding audit and feedback can moderate its effectiveness, yet this has received limited study in the quality improvement literature. In this study we sought to uncover patterns in mental models of current feedback practices within high- and low-performing healthcare facilities.

Methods: We purposively sampled 16 geographically dispersed VA hospitals based on high and low performance on a set of chronic and preventive care measures. We interviewed up to 4 personnel from each location (n = 48) to determine the facility's receptivity to audit and feedback practices. Interview transcripts were analyzed via content and framework analysis to identify emergent themes.

Results: We found high variability in the mental models of audit and feedback, which we organized into positive and negative themes. We were unable to associate mental models of audit and feedback with clinical performance due to high variance in facility performance over time. Positive mental models exhibited perceived utility of audit and feedback practices in improving performance, whereas negative mental models did not.

Conclusions: Results speak to the variability of mental models of feedback, highlighting how facilities perceive current audit and feedback practices. Findings are consistent with prior research in that variability in feedback mental models is associated with lower performance. Future research should seek to empirically link the mental models revealed in this paper to high and low levels of clinical performance.

Keywords: Barriers and facilitators for change, Organizational implementation strategies, Research policy, Research funding

Background
The Institute of Medicine (IOM) strongly advocates the use of performance measures as a critical step toward improving quality of care [1, 2]. Part of the mechanism through which performance measures improve quality of care is as a source of feedback for both individual healthcare providers and healthcare organizations [3]. Audit and feedback is particularly suitable in primary care settings, where a higher incidence of chronic conditions is managed, and thus the same set of tasks is performed multiple times, providing the feedback recipient multiple opportunities to address and change the behavior in question.

However, a recent Cochrane review concluded that audit and feedback's effectiveness is highly variable, depending on factors such as who provides the feedback, the format in which the feedback is provided, and whether goals or action plans are included as part of the feedback [4, 5]. Related audit-and-feedback work recommended a moratorium on trials comparing audit and feedback to usual care, advocating instead for studies that examine mechanisms of action, which may help determine how to optimize feedback for maximum effect [5].

The concept of mental models may be an important factor modifying the effect of audit and feedback on organizational behaviors and outcomes. Mental models are cognitive representations of concepts or phenomena. Although individuals can form mental models about any concept or phenomenon, all mental models share several characteristics: (a) they are based on a person's (or group's) belief of the truth, not necessarily on the truth itself (i.e., mental models of a phenomenon can be inaccurate); (b) mental models are simpler than the phenomenon they represent, as they are often heuristically based; (c) they are composed of knowledge, behaviors, and attitudes; and (d) they are formed from interactions with the environment and other people [6, 7]. People may form mental models about any concept or phenomenon through processing information, whether accurately or through the "gist" (see [8, 9]); mental models are thought to form the basis of reasoning and have been shown in other fields of research to influence behavior (e.g., shared mental models in teams positively influence team performance when mental models are accurate and consistent within the team [10, 11]). For example, research outside of healthcare suggests that feedback can enhance gains from training and education programs [12], such that learners form mental models that are more accurate and positive than when feedback is inadequate or not delivered.

However, it is important to note that mental models can manifest at various levels within an organization (i.e., person, team, organization), such as those within a primary care clinic. A facility-, hospital-, or organization-level mental model represents the beliefs and perceptions of an organization such that these influences are felt by individual members of the organization. Research on clinical practice guideline (CPG) implementation has established empirical links between a hospital's mental model of guidelines and its subsequent success at guideline implementation: specifically, compared to facilities that struggled with CPG implementation, facilities that were more successful at CPG implementation exhibited a clear, focused mental model of guidelines and a tendency to use feedback as a source of learning [3]. However, as the aforementioned study was not designed a priori to study audit and feedback, it lacked detail regarding the facilities' mental models about the utility of audit and feedback, which could have explained why some facilities were more likely than others to add audit and feedback to their arsenal of implementation tools. The present study directly addresses this gap in order to better shed light on the link between feedback and healthcare facility effectiveness.

Study objective
This study aimed to identify facility-level mental models about the utility of clinical audit and feedback associated with high- versus low-performing outpatient facilities (as measured by a set of chronic and preventive outpatient care clinical performance measures).
Methods

Design
This research consists of qualitative, content, and framework analyses of telephone interviews with primary care personnel and facility leadership at 16 US Department of Veterans Affairs (VA) Medical Centers, employing a cross-sectional design with purposive, key-informant sampling guided by preliminary analyses of clinical performance data. Methods for this work have been described extensively elsewhere in this journal [13] and are summarized herein. The study protocol was reviewed and approved by the Institutional Review Board at Baylor College of Medicine. We relied on the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines for reporting the results herein (see Additional file 1).

Research team and reflexivity
Interviews were conducted by research staff (Kristen Smitham, master's-level industrial/organizational psychologist; Melissa Knox, registered dietitian; and Richard SoRelle, bachelor's in sociology) trained specifically for this study by the research investigators (Drs. Sylvia Hysong, PhD, and Paul Haidet, MD). Dr. Hysong (female) is an industrial/organizational psychologist; Dr. Haidet (male) is a general internist; both researchers are experienced in conducting, facilitating, and training personnel (see Hysong et al. [13] in this journal for details of the interviewer training protocol) and were research investigators at the time the study was conducted. The research team also received additional training on qualitative analysis using Atlas.ti (the analytic software used in this project), specifically tailored for this study, from a professional consulting firm specializing in qualitative analysis. The research team had no prior relationship with any of the interviewees prior to study commencement.

Participants
We interviewed up to four participants (total n = 48) at each of 16 geographically dispersed VA Medical Centers, as key informants of the mental models and culture of their respective facilities. Participants were drawn from the following groups: the facility director, the chief of the primary care service, full-time primary care physicians and physician extenders, and full-time primary care nurses. We sought one interviewee per role category and sought to interview clinicians with at least 3 years in their current position to ensure they would have sufficient organizational experience to form more complete mental models. Table 1 summarizes which roles were interviewed at each facility. 5/16 facility directors and 3/16 primary care chiefs declined to participate (75% response rate for facility leaders); securing 24 clinician interviews (12 MD, 12 RN) required 104 invitations [13].
Site selection
Sites were selected using a purposive stratified approach based on their scores on a profile of outpatient clinical performance measures from 2007 to 2008 extracted from VA's External Peer Review Program (EPRP), one of VA's data sources for monitoring clinical performance used by VA leadership to prioritize the quality areas needing most attention. EPRP is "a random chart abstraction process conducted by an external contractor to audit performance at all VA facilities on numerous quality of care indicators, including those related to compliance with clinical practice guidelines" [14]. The program tracks over 90 measures along multiple domains of value, including quality of care for chronic conditions usually treated in outpatient settings such as diabetes, depression, tobacco use cessation, ischemic heart disease, cardiopulmonary disease, and hypertension.

For site selection, we focused on metrics of chronic and preventative care, as patients may return to the provider for follow-up care (as opposed to an urgent care clinic or emergency department). Table 2 displays the specific measures used to select the 16 sites in the study, which fell into one of four categories: high performers (the four sites with the highest average performance across measures), low performers (the four with the lowest average performance across measures), consistently moderate performers (the four with moderate average performance and the lowest variability across measures), and highly variable facilities (i.e., the four with moderate average performance and the highest variability across measures). Our study protocol, published earlier in this journal [13], describes the method of calculating the performance categories in greater detail. Table 1 summarizes basic site characteristics grouped by performance category.

Table 2 Clinical performance measures employed in site selection

EPRP mnemonic | Short description
c7n | DM-outpatient-foot sensory exam using monofilament
Dmg23 | DM-outpatient-HbA1c > 9 or not done (poor control) in the past year
Dmg28 | DM-outpatient-BP >= 160/100 or not done
Dmg31h | DM-outpatient-retinal exam, timely by disease (HEDIS)
Dmg7n | DM-outpatient-LDL-C < 120
htn10 | HTN-outpatient-Dx HTN and BP >= 160/100 or not recorded
htn9 | HTN-outpatient-Dx HTN and BP <= 140/90
p1 | Immunizations-pneumococcal outpatient-nexus
p22 | Immunizations-outpatient-influenza, ages 50–64
p3h | CA-women aged 50–69 screened for breast cancer
p4h | CA-women aged 21–64 screened for cervical cancer in the past 3 years
p6h | CA-patients receiving appropriate colorectal cancer screening (HEDIS)
smg2n | Tobacco-outpatient-used in the past 12 months-nexus-non-MH
smg6 | Tobacco-outpatient-intervention-annual-non-MH with referral and counseling
smg7 | Tobacco-outpatient-meds offered-nexus-non-MH

Used with permission from Hysong, Teal, Khan, and Haidet [13]

Table 1 Site characteristics and roles interviewed at each site

Performance category | Site | Size (# of unique patients) | Residents per 10k patients | Primary care personnel | Roles interviewed (FD, PCC, MD, RN)
High performers | B | 27,222 | 0.00 | 35 | ✓ ✓ ✓ ✓
High performers | H | 27,851 | 8.62 | 62 | ✓ ✓
High performers | M | 43,845 | 18.25 | 56 | ✓ ✓
High performers | R* | 49,813 | 31.42 | 83 | ✓
Consistently moderate | D | 44,022 | 26.18 | 115 | ✓ ✓ ✓ ✓
Consistently moderate | E | 63,313 | 10.63 | 94 | ✓ ✓ ✓ ✓
Consistently moderate | K | 46,373 | 56.93 | 125 | ✓ ✓ ✓
Consistently moderate | P* | 80,022 | 21.45 | 54 | ✓ ✓
Highly variable | A | 60,528 | 23.15 | 143 | ✓ ✓ ✓ ✓
Highly variable | G | 49,309 | 26.24 | 27 | ✓ ✓ ✓
Highly variable | L | 21,327 | 7.03 | 30 | ✓ ✓
Highly variable | Q* | 39,820 | 2.89 | 10 | ✓ ✓
Low performers | C | 44,391 | 27.51 | 88 | ✓ ✓ ✓ ✓
Low performers | F | 19,609 | 0.00 | 46 | ✓ ✓ ✓ ✓
Low performers | J | 58,630 | 24.94 | 116 | ✓ ✓ ✓ ✓
Low performers | N* | 24,795 | 0.00 | 23 | ✓ ✓ ✓

Note: Sites marked with an asterisk (*) were excluded from the study due to insufficient data (either an insufficient number of interviews, or insufficient information about mental models provided by a site's interviewees, making any findings from that site unstable). FD facility director, PCC primary care chief, MD physician, RN registered nurse. The number of residents per 10k patients is intended as a measure of the strength of a facility's academic mission, which has been shown to be a more nuanced indicator than the dichotomous medical school affiliation measure traditionally used [25].
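To make the stratification above concrete, here is a minimal sketch (not the authors' code) of how facilities could be assigned to the four performance arms from a facility-by-measure score table. The data are simulated, the facility count and measure subset are assumptions, and the handling of "moderate" facilities is deliberately simplified; the study protocol [13] defines the categories precisely.

```python
# Minimal sketch (not the authors' code): stratifying facilities into the four
# performance arms described above, given per-facility scores on EPRP measures.
# Data are simulated; "moderate" is approximated as everything outside the
# four highest and four lowest average performers.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
facilities = [f"facility_{i:03d}" for i in range(128)]              # hypothetical facility IDs
measures = ["c7n", "dmg23", "dmg28", "htn9", "htn10", "p3h", "p6h"]  # subset of Table 2 mnemonics
scores = pd.DataFrame(rng.uniform(0.6, 1.0, (len(facilities), len(measures))),
                      index=facilities, columns=measures)

profile = pd.DataFrame({
    "mean_perf": scores.mean(axis=1),  # average performance across measures (2007-2008)
    "sd_perf": scores.std(axis=1),     # variability across measures
})

high = profile.nlargest(4, "mean_perf").index                    # four highest average performers
low = profile.nsmallest(4, "mean_perf").index                    # four lowest average performers
moderate = profile.drop(high.union(low))                         # remaining ("moderate") facilities
consistently_moderate = moderate.nsmallest(4, "sd_perf").index   # moderate mean, lowest variability
highly_variable = moderate.nlargest(4, "sd_perf").index          # moderate mean, highest variability

arms = {"high": high, "low": low,
        "consistently moderate": consistently_moderate,
        "highly variable": highly_variable}
for arm, sites in arms.items():
    print(arm, list(sites))
```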
Procedure
Participants (key informants) were invited to enroll in the study initially by e-mail, followed by phone calls if e-mail contact was not successful. Participants were interviewed individually once for 1 h via telephone by a trained research team member at a mutually agreed-upon time; interviews were audio-recorded with the participant's consent (four participants agreed to participate but declined to be audio-recorded; in these cases, a second research team member was present to take typed, detailed notes during the interview). Only participants and researchers were present during the interviews. Key informants answered questions about (a) the types of EPRP information the facilities receive, (b) the types of quality/clinical performance information they actively seek out, (c) opinions and attitudes about the utility of EPRP data (with specific emphasis on the role of meeting performance objectives within the facility), (d) how they use the information they receive and/or seek out within the facility, and (e) any additional sources of information or strategies they might use to improve the facility's performance (see Additional file 2 for the interview guide). The participants sampled were selected as key leaders and stakeholders at the facility, making them ideally suited for responding to facility-level questions and enabling us to make inferences on trends in facility-level mental models. Interviews were conducted between May 2011 and November 2012.

Clinical performance change over time
Sites were originally selected based on their performance profile and membership in one of four performance categories. However, the sites did not necessarily remain in their original performance categories throughout the life of the study. Figure 1 presents the average performance scores of all available medical centers and their respective standard deviations. The colored data points represent the sites selected for the study; the respective colors represent their performance arms as determined for 2007–2008 (see Fig. 1 note). Performance for the rest of the medical centers is depicted by the black data points and is presented purely to show the participating sites' relative standing. As shown in the upper graph in the figure, the sites clustered cleanly into the four desired arms in 2007–2008. However, as shown in the lower graph, the same sites shifted positions in 2011–2012; for example, site C, which was originally a low performer in 2008, is the third-highest performer of the participating sites in 2011–2012. More importantly, the sites are spread throughout the continuum of performance, rather than forming clean, discrete clusters as they did in 2008. Consequently, our original plan to make inferences about the similarities in mental models based on the original performance clusters was no longer viable, nor was it possible to re-categorize the sites in 2011–2012 for the same purpose. We therefore adopted an alternate analytic approach to our research question: rather than explain differences in mental models among sites of known clinical performance, we sought to explore differences in clinical performance among sites with similar mental models.

Fig. 1 Mean clinical performance scores and standard deviations for all VA Medical Centers in 2007–2008 vs. 2011–2012. Note: colored points represent the four performance categories of the 16 sites used in this study: red = low, yellow = highly variable, blue = consistently moderate, green = high. In both graphs, the colors represent the category assignments the sites received in 2008, to show the extent to which their relative positions may have changed in 2012.
Data analysis

Identifying mental models for each site
Interview recordings were transcribed and analyzed using techniques adapted from framework-based analysis [15] and content analysis [16, 17] using Atlas.ti 6.2 [18]. In the instances where no audio recording was available (n = 4), interviewer field notes served as the data source for analysis.

Open coding began with one of the four research team coders reviewing the transcript, identifying and tagging text passages indicative of facilities' mental models of feedback within the individual interviews using an a priori code list (e.g., positive/negative, concerns of trust, or credibility of feedback) to which they could add emergent codes as needed; coders proceeded with axial coding using a constant comparative approach [13]. For the purposes of coding, we defined mental models as mental states regarding "leaders' perceptions, thoughts, beliefs, and expectations" of feedback [19]. Coders organized the flagged passages by role and site, then compared the organized passages, looking for common and contrasting topics across roles within a site. Topics across roles were then iteratively synthesized by the research team into a facility-level mental model of feedback derived from the corpus of codings, resulting in a 1-page summary per site describing the mental models observed at the facility. A sample site summary and the individual responses that led to that site summary are presented in Additional file 3.

Using the 1-page site-level summaries as data sources, we identified common themes across sites, derived from the data; the themes were organized into three emergent dimensions characteristic of the observed mental models (see the "Results" section below). Coders were blind to the sites' performance categories throughout the coding and analysis process. After completing the aforementioned analyses, coders were un-blinded to examine the pattern of these mental model characteristics by site and clinical performance cluster.

Data from four sites (one from each clinical performance category) were not usable after initial coding. Reasons for this include data that were insufficient (e.g., a single interview) or unreliable (e.g., interviews from individuals whose tenure was too short to develop a reliable institutional mental model) for inferring facility-level mental models. Thus, our final dataset comprised 12 sites, which were further coded by the lead author in terms of the facility's mental model positivity, negativity, or mixed perceptions of feedback. Excluded sites are marked with an asterisk in Table 1.

Relating mental models of feedback to clinical performance
As mentioned earlier, due to the change in sites' performance profiles in the period between site selection and data collection, we sought to identify clusters of sites with similar mental models to explore whether said clusters differed in their clinical performance. To accomplish this, we first sought to identify an organizing framework for the 12 site mental models identified as described in the previous section. The research team identified three dimensions along which the 12 facilities' emergent mental models could be organized: sign (positive vs. negative), perceived feedback intensity (the degree to which respondents described receiving more, or more detailed, feedback in any given instance), and perceived feedback consistency (the degree to which respondents described receiving feedback at regular intervals).

With respect to mental model sign, facility mental models were classified as positive if it could be determined or inferred from the mental model that the facility perceived EPRP or other clinical performance data as a useful or valuable source of feedback; they were classified as negative if it could be determined or inferred from the mental model that the facility did not find utility or value in EPRP or similar clinical performance data as a feedback source.

Facility-level mental models were classified as high in perceived feedback intensity if participants reported receiving more frequent or more detailed performance feedback, and as low in perceived feedback intensity if participants discussed the receipt of feedback as being particularly infrequent or not very detailed.

Facility mental models were classified as high in perceived feedback consistency if participants reported receiving feedback at regular, consistent intervals, and as low in perceived feedback consistency if participants reported believing that they received feedback at irregular intervals.

Once classified along these dimensions, we conducted a logistic regression to test whether categorization along these three dimensions predicted the sites' clinical performance categorizations. Separate regressions were conducted for 2008 and 2012 performance categories.

Results

Mental models identified
Various mental models emerged across the 12 sites regarding feedback. Therefore, we opted to categorize mental models into either positive or negative mental models of the utility and value of EPRP as a source of clinical performance feedback. This organizing strategy enabled categorization for all but two facilities, which exhibited mixed mental models in terms of sign. Negative mental models focused on the quality and the consequences of the EPRP system and were more prevalent than the positive mental models, which depicted EPRP as a means to an end. We present and describe the types of mental models emergent from the data in the subsequent sections.

Negative mental models: EPRP does not reflect actual care quality
The mental model observed across the greatest number of sites in the interviews was that EPRP was not an accurate reflection of actual quality of care (and by extension was not a good source of feedback), a theme raised at 6 of the 12 sites coded but displaying the highest levels of groundedness at sites C, K, and M. This mental model was attributable to respondent perceptions that the data were not accurate because of the chart review and sampling approach (e.g., sites C, K, and M, quoted below).

I: Is there anything they could be doing to improve the way you all view EPRP …?

P: Um well uh yeah just being more accurate. I mean the fact that we can um challenge the- the- the reports and that we do have a pretty decent proportion that- that are challenged successfully. … the registries that we're building I really think are going to be more accurate than either one because there's a look at our entire panel of patients whether or not they showed up. … it does seem a little backwards these days for someone to be coming through and looking at things by hand especially when it doesn't seem like they're being too accurate.
– Primary Care Chief, Site C

I mean the EPRP pulls from what I understand, I mean sometimes we sorta see that but it's pretty abstract data. It's not really about an individual panel. It's, it's based on very few patients each month and you know, now I guess it's pulled at the end of the year but in terms of really accessing that as a provider, um, uh that doesn't happen much.
– Physician, Site K

Being able to remove ignorance and make that data available in a user-friendly way is paramount and important and unfortunately from my perspective; our systems and data systems. We are one data-rich organization. We've got data out the ying-yang but our ability to effectively analyze it, capture it and roll it out in a meaningful way leaves a lot to be desired;
– Facility Director, Site M

Positive mental models: EPRP is a means to an end (transparency, benchmarking, and strategic alignment)
We also observed an almost equally prevalent, positive mental model at a different set of sites (n = 4), where interviewees viewed EPRP as a means to an end (improving quality of care). The specific manifestation of that end varied from site to site. For example, at one site, EPRP was a way to improve transparency:

We try to be totally transparent. Uh sometimes to the point of uh being so transparent people can see everything … and sometimes they may not sound good but if you consistently do it I think you know people understand that.
– Facility Director, Site B

At another site, EPRP was perceived as a tool for strategic alignment:

I think the VA is- is um I think they are wise in connecting what they feel are important clinical indicators with the overall performance measurement and the performance evaluation of the director so that the goals can be aligned from the clinical staff to the administrative staff and we've been very fortunate. We've gotten a lot of support [latitude in how to best align] here.
– Primary Care Chief, Site A

At other sites, EPRP was seen as a clear tool for benchmarking purposes:

You benchmark what your model or your goal or best standards of care are, and that is basically on the spectrum that embodies the whole patient, all the way from the psychological aspect to the community aspect or the social workers too. … That's the way that I see EPRP. EPRP is only you try to just set up several variables that you can measure that at the end of the day will tell you, you know what, we are taking a holistic approach to this patient and we are achieving what we think is best in order to keep this patient as healthy as possible.
– Primary Care Chief, Site H

Mixed mental model: EPRP is not accurate, but it helps us improve nonetheless
Of note, site L exhibited an interesting mix. Like the sites with the more negative mental models, they reported concerns with the quality of the data being presented to them. However, this site was unique in that, despite the limitations of the data, respondents at this site nonetheless use the data for benchmarking purposes because, if the targets are met, it is an indication to them that a certain minimum level of quality has been reached at the site.

I'd start worrying and looking at why or what am I doing that's causing it to be like this. Is it the way they pull the data? Because it's random. It's not every single patient that is recorded. Um, they pull, I believe anywhere from 5 to 10 of your charts. … So I do ask that and then if it is a true accounting then I go "OK then it's me. It's gotta be me and my team." I look at what my team is doing or what portion of that, that performance is performed by my staff and what portion of it is by me. And then from there I go OK. Then I weed it out.
– Physician, Site L

Unintended negative consequences: EPRP is making clinicians hypervigilant
In addition to the mental models described above, a second, negative mental model was observed at a single site about the unintended consequences of feeding back EPRP data—clinicians reported a feeling of hypervigilance as a result of the increased emphasis on the performance measures in the EPRP system:

There's just a lot of pressure, uh you know, that, uh it seems like leaders are under and you know, I'm in a position where I try to buffer some of that so that my providers don't feel the pressure but when your performance measures and, um other, uh things are all being looked at, um and it comes down, uh you know, through our, our electronic medium almost instantly, um it's, uh you know, looking for responses, why did this happen, it's very hard for people to feel, feel comfortable about anything.
– Primary Care Chief, Site F
Relating mental models of feedback to clinical performance
Table 3 presents the 12 sites, a summary of their respective mental models, and their classification according to the three dimensions in 2008. No significant association was observed between performance (in 2008 or 2012) and mental model sign, intensity, or consistency when tested via logistic regression and analysis of variance with sites as between-subject factors, though we acknowledge that 12 is a rather small sample size for this type of test. However, we did observe two noteworthy descriptive patterns: first, none of the sites' mental models were highly positive; in other words, at best, sites perceived EPRP performance measure-based feedback to be a means to an end (transparency, benchmarking, or strategic alignment), rather than a highly valued, integral best practice for delivering high-quality care. Second, 75% of sites exhibited moderate or low levels of feedback intensity.

Our final analyses examined descriptively whether sites with positive vs. negative vs. mixed mental models exhibited common patterns of clinical performance. Among sites with negative mental models, all four clinical performance categories from 2008 were represented; clinical performance also varied considerably among these sites in 2012, though no high performers were observed among these sites in 2012. Among sites with positive mental models, the low clinical performance category was the only category not represented in 2008, whereas all levels of clinical performance were represented in 2012. Sites with mixed or neutral mental models exhibited mostly low performance during both time periods (site L, which exhibited highly variable performance in 2008, was the lowest performer in the highly variable category). Of note, sites L and J were also the only two sites with low intensity of feedback.

Table 3 Summary of sites' mental models and degree of positivity, intensity, and consistency

Performance category (2007–2008) | Site | Performance category (2011–2012) | Mental model summary | Sign/theme | Intensity | Consistency
High performers | B | Moderate | Aim: EPRP feedback is communicated, tracked, and improved upon in a ubiquitous, transparent, non-punitive, systematic, and consistent way. | Positive: transparency | Medium | High
High performers | H | High | EPRP as a benchmark or model for the best standards of care for keeping the whole patient as healthy as possible. | Positive: benchmarking | Medium | Low
High performers | M | Moderately high | EPRP measures are generally OK, but not sophisticated enough to reflect actual care quality. | Negative: EPRP not a good representation of quality | Medium | High
Consistently moderate | D | Moderate | EPRP serves as a primary means of linking the work/efforts of all facility staff to the facility's mission: to provide the best quality of care that veterans expect and deserve. | Positive: strategic alignment | Medium | Medium
Consistently moderate | K | Moderate | EPRP is not a real, true reflection of the quality of one's practice because of the sample size at a particular time period. | Negative: EPRP not a good representation of quality | Medium | Medium
Consistently moderate | E | Moderately high | Clinicians think EPRP is inferior to their population-based, VISN-created dashboard, and leaders have concerns about overuse and misinterpretations or misuse of EPRP. | Negative: EPRP not a good representation of quality | Medium | Medium
Highly variable | A | Low | EPRP remains relevant as a starting point for setting, aligning, and monitoring clinical performance goals. | Positive: strategic alignment | Medium | High
Highly variable | G | Moderately low | EPRP as an "outside checks and balance" system that validates whether or not the facility's sense of how they are doing (e.g., good job) and of what challenge areas they have is accurate; however, there are no real or punitive consequences to scoring low. | Mixed | Medium | Medium
Highly variable | L | Low | Although EPRP does not reflect actual care quality, the numbers indicate that they are doing something consistently right that helps their patients. | Mixed | Low | High
Low performers | F | Moderate | Immediate feedback is advantageous to memory, but not always well received. | Negative: EPRP has made us hypervigilant | High | Medium
Low performers | C | Low | EPRP is viewed by some as an objective, unbiased measure with some sampling limitations; by others, EPRP is viewed as inaccurate or retrospective. | Negative: EPRP not a good representation of quality | Medium | Low
Low performers | J | Low | Site struggles to provide feedback; clinicians did not receive EPRP and PMs until the PACT implementation. | No feedback until PACT | Low | Medium

Note: 2012 performance categories differ from 2008 because 2012 performance forms a continuum rather than discrete categories (see Fig. 1 and main text for details).
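To illustrate the shape of the regression analysis described under "Data analysis" and summarized above, the following is a minimal sketch, not the authors' analysis code. The 12 site codings and the outcome values are invented for demonstration, and a regularized binary logit (high performer vs. not) stands in for the exact model specification, which the paper does not detail; with only 12 sites any such model is underpowered, as the text notes.

```python
# Minimal sketch (not the authors' analysis code): regressing a site's performance
# category on the three mental-model dimensions (sign, intensity, consistency).
# The 12 rows below are invented; a regularized logit keeps the toy example stable
# despite the very small sample.
import pandas as pd
from sklearn.linear_model import LogisticRegression

sites = pd.DataFrame({
    "sign":        ["pos", "pos", "neg", "pos", "neg", "neg",
                    "pos", "mixed", "mixed", "neg", "neg", "neg"],
    "intensity":   ["med", "med", "med", "med", "med", "med",
                    "med", "med", "low", "high", "med", "low"],
    "consistency": ["high", "low", "high", "med", "med", "med",
                    "high", "med", "high", "med", "low", "med"],
    # 1 = high performer, 0 = otherwise (hypothetical 2008 coding)
    "high_perf_2008": [0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
})

X = pd.get_dummies(sites[["sign", "intensity", "consistency"]], drop_first=True).astype(float)
y = sites["high_perf_2008"]

model = LogisticRegression(max_iter=1000).fit(X, y)  # L2-regularized by default
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name:20s} {coef:+.3f}")
```

A parallel model using the 2012 performance coding as the outcome would correspond to the second set of regressions mentioned in the Methods.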
Discussion
This study aimed to identify mental models of audit and feedback exhibited by outpatient facilities with varying levels of clinical performance on a set of chronic and preventive care clinical performance measures. Consistent with the highly individualized cultures and modi operandi of individual VA facilities across the country, we found high variability in facilities' mental models of audit and feedback, and no clear relationship between variability among these facilities' mental models of audit and feedback and their facility-level clinical performance. Interestingly, one area of consistency across all facilities is that, without prompting, mental models of EPRP reported by interviewees included some component of the quality of the data.

These findings are consistent with previous research noting considerable variability in individual and shared mental models of clinical practice guidelines [3]. Additionally, the finding that the two sites with neutral mental models and, by extension, low intensity of feedback exhibited low clinical performance is consistent with previous research indicating feedback intensity may moderate feedback effectiveness [4]. Contrary to previous research, however, no particular mental model of feedback was found to be associated with better clinical performance. Two possible reasons for this include the conditions under which information is processed (e.g., fuzzy trace theory suggests affect impacts information processing and memory recall [20, 21]), and the local adaptation for feedback purposes of a standardized, national program such as EPRP.

Another possible reason for the lack of relationship between clinical performance and mental models could concern source credibility. The strongest negative mental model indicated providers simply did not perceive EPRP to be a credible source of clinical performance assessment. A recent model of physician acceptance of feedback [22] posits that low feedback source credibility is more likely to be associated with negative emotions such as apathy, irritation, or resentment; however, depending on other factors, this emotional reaction can lead either to no action (no behavior modification) or counterproductive actions (e.g., defending low performance rather than changing it); the model does not specify what factors may compel someone to adopt one pathway (e.g., inaction) vs. another (e.g., action resulting in negative outcomes). Although half of the sites reported having a mental model consistent with low source credibility, our data did not capture each site's specific pathway from emotional reaction to behavioral outcome and impact. It is therefore possible that unspecified factors outside our data could have been the deciding factor, thus explaining the variability in clinical performance, even within sites with a strong negative mental model of EPRP as a source of feedback.

Implications
Our study demonstrates the need to extend theory- and evidence-based approaches to feedback design and implementation to foster receptive mental models toward feedback [5]. Implementing effective feedback practices may elicit receptiveness to the feedback systems in place, allowing healthcare organizations to reap greater gains from performance management systems that cost millions to sustain. One noteworthy implication of this study is that positive mental models regarding feedback hinge partially on the feedback system's ability to deliver clinical performance data in ways the various users can understand quickly and act upon personally (that is, feedback can be translated into an actionable plan for individual improvement [4, 23]). The strongest and most common mental model observed was that EPRP was not a good representation of care quality—if the feedback recipients question the credibility of the feedback, it is unlikely to effectively change performance.
Limitations
As with all research, this study had limitations. While data were being collected for this study, a seminal quality improvement initiative was implemented: VA's nationwide transition from traditional primary care clinics to Patient Aligned Care Teams (VA's implementation of the Primary Care Medical Home model) occurred in 2010. This may in part account for the absence of an observable relationship between mental models and clinical performance. Team-based care involves more complex coordination among clinical staff and involves new roles, responsibilities, and relationships among existing clinical personnel, which may in turn tax clinicians enrolled in the current study [24]. Further, clinicians and divisions of health service may require time to adjust to their new roles, responsibilities, relationships, and changes in workflow. We believe this is illustrated in our data, as the mean performance score across all sampled facilities dips significantly from 2008 to 2012 (2008: mean = .935, 95% CI = .004; 2012: mean = .913, 95% CI = .004; p < .001), variability in performance across measures is visibly greater for all facilities observed (2008: mean stdev = .05, 95% CI = .004; 2012: mean stdev = .07, 95% CI = .003; p < .001), and our sites' standings in the distribution materially changed (see Fig. 1 for all of these patterns). It is therefore possible this initiative was a sufficiently large system change to overwhelm any potentially detectable relationship between mental models of feedback and clinical performance.

Second, we were unable to use all the data we collected due to the limited number of interviews at some sites (n = 1; see the data analysis section for more details); however, all four performance categories contained one site with unusable data, thereby minimizing any risk of bias toward a given performance category.
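For readers who want to see the shape of the 2008-vs-2012 comparison reported in the first limitation, the following is a minimal sketch under stated assumptions: the per-facility scores are simulated stand-ins (the facility-level data are not published here), and a paired t-test with t-based 95% confidence intervals is assumed rather than taken from the paper.

```python
# Illustrative sketch (not the authors' code): comparing mean facility-level
# performance between two periods, as in the 2008-vs-2012 comparison above.
# Scores are simulated; the paired t-test and t-based 95% CI are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_facilities = 130                                    # hypothetical number of VA medical centers
perf_2008 = rng.normal(0.935, 0.025, n_facilities)    # simulated per-facility mean scores, 2008
perf_2012 = perf_2008 + rng.normal(-0.022, 0.02, n_facilities)  # simulated 2012 scores (slight dip)

def ci95_halfwidth(x):
    """Half-width of a t-based 95% confidence interval for the mean."""
    return stats.t.ppf(0.975, len(x) - 1) * stats.sem(x)

t_stat, p_val = stats.ttest_rel(perf_2008, perf_2012)  # paired comparison across facilities
print(f"2008: mean = {perf_2008.mean():.3f}, 95% CI half-width = {ci95_halfwidth(perf_2008):.3f}")
print(f"2012: mean = {perf_2012.mean():.3f}, 95% CI half-width = {ci95_halfwidth(perf_2012):.3f}")
print(f"paired t = {t_stat:.2f}, p = {p_val:.4g}")
```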
Conclusions
Despite a national, standardized system of clinical performance measurement and tools for receiving clinical performance information, mental models of clinical performance feedback vary widely across facilities. Currently, evidence supports a strong negative mental model toward feedback, likely driven by source credibility concerns, suggesting that EPRP measures may be of limited value as a source of clinical audit and feedback to clinicians. Although our data are at this time inconclusive with respect to the relationship of these mental model differences to performance, the variability in mental models suggests variability in how feedback systems relying on EPRP were implemented across sites; future work should concentrate on identifying implementation factors and practices (such as establishing the credibility of clinical performance measures in the eyes of clinicians) that lead to strong, productive mental models of the innovation being implemented.

Endnotes
1. Defined as the average of all clinical performance measure scores used in the performance profile for a given facility in a given performance period [13].
Additional files

Additional file 1: COREQ (COnsolidated criteria for REporting Qualitative research) checklist. (PDF 489 kb)
Additional file 2: Interview guide. (DOCX 26 kb)
Additional file 3: Example site summary. (DOCX 19 kb)

Acknowledgements
We would like to thank El Johnson, Gretchen Gribble, and Thach Tran for their previous contributions to data collection and analysis on this project (EJ and GG for data collection assistance, TT for data analysis assistance).

Funding
The research reported here was supported by the US Department of Veterans Affairs Health Services Research and Development Service (grant nos. IIR 09-095, CD2-07-0181) and partly supported by the facilities and resources of the Houston VA HSR&D Center of Excellence (COE) (CIN 13-413). All authors' salaries were supported in part by the Department of Veterans Affairs.

Availability of data and materials
The datasets generated and/or analyzed during the current study are not publicly available, as they are individual interviews that may potentially identify participants and compromise confidentiality, but are available from the corresponding author on reasonable request.

Authors' contributions
SJH conceptualized the idea, secured the grant funding, analyzed the data, and is responsible for the principal manuscript writing, material edits to the manuscript, and material contributions to the interpretation. KS and RS contributed to the data collection, data analysis, material edits to the manuscript, and material contributions to the interpretation. AA contributed to the data analysis and material edits to the manuscript. AH contributed to the material edits to the manuscript and material contributions to the interpretation. PH contributed to the data analysis, material edits to the manuscript, and material contributions to the interpretation. All authors read and approved the final manuscript.

Authors' information
Contributions by Drs. Paul Haidet and Ashley Hughes, currently at Penn State University College of Medicine and University of Illinois College of Applied Health Sciences, respectively, were made during their tenure at the Michael E. DeBakey VA Medical Center and Baylor College of Medicine. The views expressed in this article are solely those of the authors and do not necessarily reflect the position or policy of the authors' affiliate institutions, the Department of Veterans Affairs, or the US government.

Ethics approval and consent to participate
Our protocol was reviewed and approved by the Institutional Review Board (IRB) at Baylor College of Medicine (protocol approval #H-20386).

Competing interests
The authors declare that they have no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Correspondence: hysong@bcm.edu; sylvia.hysong@va.gov; http://www.hsrd.houston.med.va.gov

Author details
1 Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA. 2 Baylor College of Medicine, 2002 Holcombe Blvd, Houston, TX 77030, USA. 3 VISN 4 Center for Evaluation of PACT (CEPACT), Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA. 4 Department of Biomedical and Health Information Sciences, University of Illinois at Chicago, Chicago, IL, USA. 5 Penn State University College of Medicine, Hershey, PA, USA. 6 Center for Innovations in Quality Effectiveness and Safety, Houston, Texas, USA.

Received: 17 November 2017. Accepted: 17 May 2018.

References
1. Committee on Quality of Health Care in America. Crossing the quality chasm: a new health care system for the 21st century. Washington, DC: National Academies Press; 2001.
2. Committee on Quality of Health Care in America. Rewarding provider performance: aligning incentives in Medicare. Washington, DC: National Academies Press; 2006.
3. Hysong SJ, Best RG, Pugh JA, Moore FI. Not of one mind: mental models of clinical practice guidelines in the Veterans Health Administration. Health Serv Res. 2005;40(3):829–47.
4. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, O'Brien MA, Johansen M, Grimshaw J, Oxman AD. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:1–229.
5. Ivers NM, Sales A, Colquhoun H, Michie S, Foy R, Francis JJ, Grimshaw JM. No more 'business as usual' with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implement Sci. 2014;9:14.
6. Kraiger K, Wenzel LH. Conceptual development and empirical evaluation of measures of shared mental models as indicators of team effectiveness. In: Brannick MT, Salas E, Prince E, editors. Team performance assessment and measurement: theory, methods and applications. Mahwah, NJ: Lawrence Erlbaum; 1997. p. 63–84.
7. Johnson-Laird PN, Girotto V, Legrenzi P. Mental models: a gentle guide for outsiders. Sistemi Intelligenti. 1998;9(68):33.
8. Reyna VF, Brainerd CJ. Fuzzy-trace theory and framing effects in choice: gist extraction, truncation, and conversion. J Behav Decis Mak. 1991;4(4):249–62.
9. Reyna VF. A theory of medical decision making and health: fuzzy trace theory. Med Decis Mak. 2008;28(6):850–65.
10. Cooke NJ, Gorman JC. Assessment of team cognition. In: Turner CW, Lewis JR, Nielsen J, editors. International encyclopedia of ergonomics and human factors. Boca Raton, FL: CRC Press; 2006. p. 270–5.
11. DeChurch LA, Mesmer-Magnus JR. The cognitive underpinnings of effective teamwork: a meta-analysis. J Appl Psychol. 2010;95(1):32–53.
12. Schmidt RA, Bjork RA. New conceptualizations of practice: common principles in three paradigms suggest new concepts for training. Psychol Sci. 1992;3(4):207–17.
13. Hysong SJ, Teal CR, Khan MJ, Haidet P. Improving quality of care through improved audit and feedback. Implement Sci. 2012;7:45–54.
14. Hysong SJ, Best RG, Pugh JA. Audit and feedback and clinical practice guideline adherence: making feedback actionable. Implement Sci. 2006;1(1).
15. Strauss AL, Corbin J. Basics of qualitative research: techniques and procedures for developing grounded theory. 2nd ed. Thousand Oaks, CA: Sage Publications; 1998.
16. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.
17. Weber RP. Basic content analysis, vol. 49. 2nd ed. Newbury Park: Sage; 1990.
18. Atlas.ti, version 6.2. Berlin: Scientific Software Development; 2010.
19. Klimoski R, Mohammed S. Team mental model: construct or metaphor? J Manag. 1994;20(2):403–37.
20. Reyna VF, Brainerd CJ. Fuzzy-trace theory: an interim synthesis. Learn Individ Differ. 1995;7(1):1–75.
21. Brainerd CJ, Reyna VF. Fuzzy-trace theory and false memory. Curr Dir Psychol Sci. 2002;11(5):164–9.
22. Payne VL, Hysong SJ. Model depicting aspects of audit and feedback that impact physicians' acceptance of clinical performance feedback. BMC Health Serv Res. 2016;16(1):260.
23. Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O'Brien MA, French SD, Young J, Odgaard-Jensen J. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–41.
24. Hysong SJ, Knox MK, Haidet P. Examining clinical performance feedback in patient-aligned care teams. J Gen Intern Med. 2014;29:667–74.
25. Byrne MM, Daw CN, Nelson HA, Urech TH, Pietz K, Petersen LA. Method to develop health care peer groups for quality and financial comparisons across hospitals. Health Serv Res. 2009;44(2p1):577–92.

Mental models of audit and feedback in primary care settings

Free
11 pages

Loading next page...
 
/lp/springer_journal/mental-models-of-audit-and-feedback-in-primary-care-settings-1bMdM3vorW
Publisher
Springer Journals
Copyright
Copyright © 2018 by The Author(s).
Subject
Medicine & Public Health; Health Promotion and Disease Prevention; Health Administration; Health Informatics; Public Health
eISSN
1748-5908
D.O.I.
10.1186/s13012-018-0764-3
Publisher site
See Article on Publisher Site

Abstract

Background: Audit and feedback has been shown to be instrumental in improving quality of care, particularly in outpatient settings. The mental model individuals and organizations hold regarding audit and feedback can moderate its effectiveness, yet this has received limited study in the quality improvement literature. In this study we sought to uncover patterns in mental models of current feedback practices within high- and low-performing healthcare facilities. Methods: We purposively sampled 16 geographically dispersed VA hospitals based on high and low performance on a set of chronic and preventive care measures. We interviewed up to 4 personnel from each location (n = 48) to determine the facility’s receptivity to audit and feedback practices. Interview transcripts were analyzed via content and framework analysis to identify emergent themes. Results: We found high variability in the mental models of audit and feedback, which we organized into positive and negative themes. We were unable to associate mental models of audit and feedback with clinical performance due to high variance in facility performance over time. Positive mental models exhibit perceived utility of audit and feedback practices in improving performance; whereas, negative mental models did not. Conclusions: Results speak to the variability of mental models of feedback, highlighting how facilities perceive current audit and feedback practices. Findings are consistent with prior research in that variability in feedback mental models is associated with lower performance.; Future research should seek to empirically link mental models revealed in this paper to high and low levels of clinical performance. Keywords: Barriers and facilitators for change, Organizational implementation strategies, Research policy, Research funding Background performed multiple times, providing the feedback recipi- The Institute of Medicine (IOM) strongly advocates the ent multiple opportunities to address and change the be- use of performance measures as a critical step toward havior in question. improving quality of care [1, 2]. Part of the mechanism However, a recent Cochrane review concluded that through which performance measures improve quality of audit-and-feedback’s effectiveness is highly variable, de- care is as a source of feedback for both individual pending on factors such as who provides the feedback, the healthcare providers and healthcare organizations [3]. format in which the feedback is provided, and whether Audit and feedback is particularly suitable in primary goals or action plans are included as part of the feedback care settings, where a higher incidence of chronic condi- [4, 5]. Related audit-and-feedback work recommended a tions is managed, and thus, the same set of tasks is moratorium on trials comparing audit and feedback to usual care, advocating instead for studies that examine mechanisms of action, which may help determine how to * Correspondence: hysong@bcm.edu; sylvia.hysong@va.gov; http://www.hsrd. optimize feedback for maximum effect [5]. houston.med.va.gov The concept of mental models may be an important Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA factor modifying the effect of audit and feedback on Baylor College of Medicine, 2002 Holcombe Blvd, Houston, TX 77030, USA Full list of author information is available at the end of the article © The Author(s). 
2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Hysong et al. Implementation Science (2018) 13:73 Page 2 of 11 organizational behaviors and outcomes. Mental models with high versus low-performing outpatient facilities (as are cognitive representations of concepts or phenomena. measured by a set of chronic and preventive outpatient Although individuals can form mental models about any care clinical performance measures). concept or phenomena, all mental models share several characteristics: (a) they are based on a person’s (or Methods group’s) belief of the truth, not necessarily on the truth Design itself (i.e., mental models of a phenomenon can be in- This research consists of qualitative, content, and frame- accurate), (b) mental models are simpler than the work analyses of telephone interviews with primary care phenomenon they represent, as they are often heuristic- personnel and facility leadership at 16 US Department of ally based, (c) they are composed of knowledge, behav- Veterans Affairs (VA) Medical Centers, employing a iors, and attitudes, and (d) they are formed from cross-sectional design with purposive, key-informant sam- interactions with the environment and other people [6, pling guided by preliminary analyses of clinical perform- 7] People may form mental models about any concept ance data. Methods for this work have been described or phenomenon through processing information, extensively elsewhere in this journal [13] and are summa- whether accurately or through the “gist,” see [8, 9]; men- rized herein. The study protocol was reviewed and ap- tal models are thought to form the basis of reasoning proved by the Institutional Review Board at Baylor College and have been shown in other fields of research to of Medicine. We relied on the Consolidated Criteria for influence behavior (e.g., shared mental models in Reporting Qualitative Research (COREQ) guidelines for teams positively influences team performance when reporting the results herein (see Additional file 1). mental models are accurate and consistent within the team [10, 11]. For example, research outside of Research team and reflexivity healthcare suggests that feedback can enhance gains Interviews were conducted by research staff (Kristen Smi- from training and education programs [12], such that tham, master’s level industrial/organizational psychologist, learners form mental models that are more accurate Melissa Knox, registered dietitian; and Richard SoRelle, and positive than when feedback is inadequate or not bachelor’s in sociology) trained specifically for this study delivered. by the research investigators (Drs. Sylvia Hysong, PhD However, it is important to note that mental models can and Paul Haidet, MD). Dr. Hysong (female) is an indus- manifest at various levels within an organization (i.e., per- trial/organizational psychologist; Dr. 
Haidet (male) is a son, team, organization), such as those within a primary general internist; both researchers are experienced in con- care clinic. A facility, hospital, or organization-level men- ducting, facilitating, and training personnel (see Hysong et tal model represents the beliefs and perceptions of an al. [13] in this journal for details of interviewer training organization such that these influences are felt by individ- protocol) and were research investigators at the time the ual members of the organization. Research on clinical study was conducted. The research team also received practice guideline (CPG) implementation has established additional training on qualitative analysis using Atlas.ti empirical links between a hospital’s mental model of (the analytic software used in this project) specifically tai- guidelines and their subsequent success at guideline im- lored for this study from a professional consulting firm plementation: specifically, compared to facilities who specializing in qualitative analysis. The research team had struggled with CPG implementation, facilities who were no prior relationship with any of the interviewees prior to more successful at CPG implementation exhibited a clear, study commencement. focused mental model of guidelines, and a tendency to use feedback as a source of learning [3]. However, as the Participants aforementioned study was not designed a priori to study We interviewed up to four participants (total n =48) at audit-and-feedback, it lacked detail regarding the facilities’ each of 16 geographically dispersed VA Medical Centers, mental models about the utility of audit-and-feedback, as key informants of the mental models and culture of which could have explained why some facilities were their respective facilities. Participants were drawn from more likely than others to add audit-and-feedback to the following groups: the facility director, the chief of their arsenal of implementation tools. The present the primary care service, full-time primary care physi- study directly addresses this gap in order to better cians and physician extenders, and full-time primary shed light on the link between feedback and health- care nurses. We sought one interviewee per role cat- care facility effectiveness. egory and sought to interview clinicians with at least 3 years in their current position to ensure they would Study objective have sufficient organizational experience to form more This study aimed to identify facility-level mental models complete mental models. Table 1 summarizes which about the utility of clinical audit and feedback associated roles were interviewed at each facility. 5/16 facility Hysong et al. Implementation Science (2018) 13:73 Page 3 of 11 directors and 3/16 primary care chiefs declined to par- specific measures used to select the 16 sites in the study, ticipate (75% response rate for facility leaders); securing which fell into one of four categories: high performers 24 clinician interviews (12 MD, 12 RN) required 104 in- (the four sites with highest average performance across vitations [13]. 
Site selection
Sites were selected using a purposive stratified approach based on their scores on a profile of outpatient clinical performance measures from 2007 to 2008, extracted from VA's External Peer Review Program (EPRP), one of VA's data sources for monitoring clinical performance used by VA leadership to prioritize the quality areas needing the most attention. EPRP is "a random chart abstraction process conducted by an external contractor to audit performance at all VA facilities on numerous quality of care indicators, including those related to compliance with clinical practice guidelines" [14]. The program tracks over 90 measures along multiple domains of value, including quality of care for chronic conditions usually treated in outpatient settings such as diabetes, depression, tobacco use cessation, ischemic heart disease, cardiopulmonary disease, and hypertension.
For site selection, we focused on metrics of chronic and preventative care, as patients may return to the provider for follow-up care (as opposed to an urgent care clinic or emergency department). Table 2 displays the specific measures used to select the 16 sites in the study, which fell into one of four categories: high performers (the four sites with the highest average performance across measures), low performers (the four with the lowest average performance across measures), consistently moderate performers (the four with moderate average performance and the lowest variability across measures), and highly variable facilities (i.e., the four with moderate average performance and the highest variability across measures). Our study protocol, published earlier in this journal [13], describes the method of calculating the performance categories in greater detail. Table 1 summarizes basic site characteristics grouped by performance category.

Table 1 Site characteristics and roles interviewed at each site
Performance category | Site | Size (# of unique patients) | Residents per 10k patients | Number of primary care personnel | Interviewee roles (FD PCC MD RN)
High performers | B | 27,222 | 0.00 | 35 | ✓ ✓ ✓ ✓
High performers | H | 27,851 | 8.62 | 62 | ✓ ✓
High performers | M | 43,845 | 18.25 | 56 | ✓ ✓
High performers | R | 49,813 | 31.42 | 83 | ✓
Consistently moderate | D | 44,022 | 26.18 | 115 | ✓ ✓ ✓ ✓
Consistently moderate | E | 63,313 | 10.63 | 94 | ✓ ✓ ✓ ✓
Consistently moderate | K | 46,373 | 56.93 | 125 | ✓ ✓ ✓
Consistently moderate | P | 80,022 | 21.45 | 54 | ✓ ✓
Highly variable | A | 60,528 | 23.15 | 143 | ✓ ✓ ✓ ✓
Highly variable | G | 49,309 | 26.24 | 27 | ✓ ✓ ✓
Highly variable | L | 21,327 | 7.03 | 30 | ✓ ✓
Highly variable | Q | 39,820 | 2.89 | 10 | ✓ ✓
Low performers | C | 44,391 | 27.51 | 88 | ✓ ✓ ✓ ✓
Low performers | F | 19,609 | 0.00 | 46 | ✓ ✓ ✓ ✓
Low performers | J | 58,630 | 24.94 | 116 | ✓ ✓ ✓ ✓
Low performers | N | 24,795 | 0.00 | 23 | ✓ ✓ ✓
Note: sites listed in italic type were excluded from the study due to insufficient data (either an insufficient number of interviews or insufficient information about mental models provided by a site's interviewees, making any findings from that site unstable). FD facility director, PCC primary care chief, MD physician, RN registered nurse. The number of residents per 10k patients is intended as a measure of the strength of the facility's academic mission, which has been shown to be a more nuanced indicator than the dichotomous medical school affiliation measure used traditionally [25].

Table 2 Clinical performance measures employed in site selection
EPRP mnemonic | Short description
c7n | DM-outpatient-foot sensory exam using monofilament
Dmg23 | DM-outpatient-HbA1c > 9 or not done (poor control) in the past year
Dmg28 | DM-outpatient-BP >= 160/100 or not done
Dmg31h | DM-outpatient-retinal exam, timely by disease (HEDIS)
Dmg7n | DM-outpatient-LDL-C < 120
htn10 | HTN-outpatient-Dx HTN and BP >= 160/100 or not recorded
htn9 | HTN-outpatient-Dx HTN and BP <= 140/90
p1 | Immunizations-pneumococcal outpatient-nexus
p22 | Immunizations-outpatient-influenza ages 50-64
p3h | CA-women aged 50-69 screened for breast cancer
p4h | CA-women aged 21-64 screened for cervical cancer in the past 3 years
p6h | CA-patients receiving appropriate colorectal cancer screening (HEDIS)
smg2n | Tobacco-outpatient-used in the past 12 months-nexus-non-MH
smg6 | Tobacco-outpatient-intervention-annual-non-MH with referral and counseling
smg7 | Tobacco-outpatient-meds offered-nexus-non-MH
Used with permission from Hysong, Teal, Khan, and Haidet [13]
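As a rough, purely illustrative sketch (not part of the study's methods; see the protocol paper [13] for the actual procedure), the four-arm selection logic described above can be expressed in a few lines of Python. The data frame, column names, and the helper function categorize_sites are hypothetical; the only assumptions carried over from the text are that each facility is profiled by its mean and its variability across the EPRP measures.

# Illustrative sketch only: approximates the four-arm site selection described
# above from a hypothetical table of EPRP scores (one row per facility x measure).
import pandas as pd

def categorize_sites(eprp: pd.DataFrame, n_per_arm: int = 4) -> pd.Series:
    """Assign each facility to one of four performance arms.

    `eprp` is assumed to have columns 'facility', 'measure', and 'score'.
    """
    # Facility profile: mean performance and spread across measures.
    profile = eprp.groupby("facility")["score"].agg(mean="mean", sd="std")

    arms = pd.Series("unselected", index=profile.index, name="arm")
    arms[profile["mean"].nlargest(n_per_arm).index] = "high performer"
    arms[profile["mean"].nsmallest(n_per_arm).index] = "low performer"

    # Remaining (moderate-mean) facilities are split on variability across measures.
    moderate = profile[arms == "unselected"]
    arms[moderate["sd"].nsmallest(n_per_arm).index] = "consistently moderate"
    arms[moderate["sd"].nlargest(n_per_arm).index] = "highly variable"
    return arms

Called on a facility-by-measure score table, a function like this would label four facilities per arm, mirroring the high, low, consistently moderate, and highly variable groups shown in Table 1.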
Procedure
Participants (key informants) were invited to enroll in the study initially by e-mail, followed by phone calls if e-mail contact was not successful. Participants were interviewed individually once for 1 h via telephone by a trained research team member at a mutually agreed upon time; interviews were audio-recorded with the participant's consent (four participants agreed to participate but declined to be audio-recorded; in these cases, a second research team member was present to take typed, detailed notes during the interview). Only participants and researchers were present during the interviews. Key informants answered questions about (a) the types of EPRP information the facilities receive, (b) the types of quality/clinical performance information they actively seek out, (c) opinions and attitudes about the utility of EPRP data (with specific emphasis on the role of meeting performance objectives within the facility), (d) how they use the information they receive and/or seek out within the facility, and (e) any additional sources of information or strategies they might use to improve facility performance (see Additional file 2 for the interview guide). The participants sampled were selected as key leaders and stakeholders at the facility, making them ideally suited to respond to facility-level questions and enabling us to make inferences about trends in facility-level mental models. Interviews were conducted between May 2011 and November 2012.

Clinical performance change over time
Sites were originally selected based on their performance profile and membership in one of four performance categories. However, the sites did not necessarily remain in their original performance categories throughout the life of the study. Figure 1 presents the average performance scores of all available medical centers and their respective standard deviations. The colored data points represent the sites selected for the study; the respective colors represent their performance arms as determined for 2007-2008 (see the Fig. 1 note). Performance for the rest of the medical centers is depicted by the black data points and is presented purely to show the participating sites' relative standing. As shown in the upper graph in the figure, the sites clustered cleanly into the four desired arms in 2007-2008. However, as shown in the lower graph, the same sites shifted positions in 2011-2012; for example, site C, which was originally a low performer in 2008, was the third-highest performer of the participating sites in 2011-2012. More importantly, the sites were spread throughout the continuum of performance, rather than forming clean, discrete clusters as they did in 2008. Consequently, our original plan to make inferences about similarities in mental models based on the original performance clusters was no longer viable, nor was it possible to re-categorize the sites in 2011-2012 for the same purpose. We therefore adopted an alternate analytic approach to our research question: rather than explain differences in mental models among sites of known clinical performance, we sought to explore differences in clinical performance among sites with similar mental models.

Fig. 1 Mean clinical performance scores and standard deviations for all VA Medical Centers in 2007-2008 vs. 2011-2012. Note: colored points represent the four performance categories of the 16 sites used in this study: red = low, yellow = highly variable, blue = consistently moderate, green = high. In both graphs, the colors represent the category assignments the sites received in 2008, to show the extent to which their relative positions may have changed in 2012.
Data analysis
Identifying mental models for each site
Interview recordings were transcribed and analyzed using techniques adapted from framework-based analysis [15] and content analysis [16, 17], using Atlas.ti 6.2 [18]. In the instances where no audio recording was available (n = 4), interviewer field notes served as the data source for analysis. Open coding began with one of the four research team coders reviewing the transcript and identifying and tagging text passages indicative of facilities' mental models of feedback within the individual interviews, using an a priori code list (e.g., positive/negative, concerns of trust, or credibility of feedback) to which they could add emergent codes as needed; coders then proceeded with axial coding using a constant comparative approach [13]. For the purposes of coding, we defined mental models as mental states regarding "leaders' perceptions, thoughts, beliefs, and expectations" of feedback [19]. Coders organized the flagged passages by role and site, then compared the organized passages, looking for common and contrasting topics across roles within a site. Topics across roles were then iteratively synthesized by the research team into a facility-level mental model of feedback derived from the corpus of codings, resulting in a 1-page summary per site describing the mental models observed at the facility. A sample site summary and the individual responses that led to that site summary are presented in Additional file 3.
Using the 1-page site-level summaries as data sources, we identified common themes across sites, derived from the data; the themes were organized into three emergent dimensions characteristic of the observed mental models (see the "Results" section, below). Coders were blind to the sites' performance categories throughout the coding and analysis process. After completing the aforementioned analyses, coders were un-blinded to examine the pattern of these mental model characteristics by site and clinical performance cluster.
Data from four sites (one from each clinical performance category) were not usable after initial coding. Reasons for this included data that were insufficient (e.g., a single interview) or unreliable (e.g., interviews from individuals whose tenure was too short to develop a reliable institutional mental model) for inferring facility-level mental models. Thus, our final dataset comprised 12 sites, which were further coded by the lead author in terms of the facility's positive, negative, or mixed perceptions of feedback. Excluded sites appear in gray italics in Table 1.

Relating mental models of feedback to clinical performance
As mentioned earlier, due to the change in sites' performance profiles in the period between site selection and data collection, we sought to identify clusters of sites with similar mental models to explore whether said clusters differed in their clinical performance. To accomplish this, we first sought to identify an organizing framework for the 12 site-level mental models identified as described in the previous section. The research team identified three dimensions along which the 12 facilities' emergent mental models could be organized: sign (positive vs. negative), perceived feedback intensity (the degree to which respondents described receiving more, or more detailed, feedback in any given instance), and perceived feedback consistency (the degree to which respondents described receiving feedback at regular intervals).
With respect to mental model sign, facility mental models were classified as positive if it could be determined or inferred from the mental model that the facility perceived EPRP or other clinical performance data as a useful or valuable source of feedback; they were classified as negative if it could be determined or inferred that the facility did not find utility or value in EPRP or similar clinical performance data as a feedback source. Facility-level mental models were classified as high in perceived feedback intensity if participants reported receiving more frequent or more detailed performance feedback, and as low in perceived feedback intensity if participants described the receipt of feedback as particularly infrequent or not very detailed. Facility mental models were classified as high in perceived feedback consistency if participants reported receiving feedback at regular, consistent intervals, and as low in perceived feedback consistency if participants reported believing that they received feedback at irregular intervals.
Once classified along these dimensions, we conducted a logistic regression to test whether categorization along these three dimensions predicted the sites' clinical performance categorizations. Separate regressions were conducted for the 2008 and 2012 performance categories.
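To make the analytic form concrete, a minimal sketch of this kind of logistic regression is shown below. It is illustrative only, not the authors' code: the DataFrame sites, its column names (sign, intensity, consistency, high_performer, high_2008), and the helper function are hypothetical assumptions, and with only 12 sites such a model is badly underpowered, as the authors themselves note.

# Illustrative sketch only: the general form of the logistic regression described above.
# `sites` is a hypothetical DataFrame with one row per facility and columns:
#   sign ('positive'/'negative'/'mixed'), intensity ('low'/'medium'/'high'),
#   consistency ('low'/'medium'/'high'), and high_performer (0/1 for a given year).
import statsmodels.formula.api as smf

def fit_performance_model(sites):
    """Regress a binary performance category on the three mental-model dimensions."""
    model = smf.logit(
        "high_performer ~ C(sign) + C(intensity) + C(consistency)",
        data=sites,
    )
    # With n = 12 this will often fail to converge or show perfect separation;
    # the call is shown only to illustrate the model specification.
    return model.fit(disp=False)

# Separate fits would be run for the 2008 and 2012 categorizations, e.g.:
# result_2008 = fit_performance_model(sites.assign(high_performer=sites.high_2008))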
Results
Mental models identified
Various mental models emerged across the 12 sites regarding feedback. Therefore, we opted to categorize mental models into either positive or negative mental models of the utility and value of EPRP as a source of clinical performance feedback. This organizing strategy enabled categorization for all but two facilities, which exhibited mixed mental models in terms of sign. Negative mental models focused on the quality and the consequences of the EPRP system and were more prevalent than the positive mental models, which depicted EPRP as a means to an end. We present and describe the types of mental models emergent from the data in the subsequent sections.

Negative mental models: EPRP does not reflect actual care quality
The mental model observed across the greatest number of sites in the interviews was that EPRP was not an accurate reflection of actual quality of care (and by extension was not a good source of feedback), a theme raised at 6 of the 12 sites coded but displaying the highest levels of groundedness at sites C, K, and M. This mental model was attributable to respondent perceptions that the data were not accurate because of the chart review and sampling approach (e.g., sites C, K, and M, quoted below).

I: Is there anything they could be doing to improve the way you all view EPRP …?

P: Um well uh yeah just being more accurate. I mean the fact that we can um challenge the- the- the reports and that we do have a pretty decent proportion that- that are challenged successfully. … the registries that we're building I really think are going to be more accurate than either one because there's a look at our entire panel of patients whether or not they showed up. … it does seem a little backwards these days for someone to be coming through and looking at things by hand especially when it doesn't seem like they're being too accurate.
– Primary Care Chief, Site C
I mean the EPRP pulls from what I understand, I mean sometimes we sorta see that but it's pretty abstract data. It's not really about an individual panel. It's, it's based on very few patients each month and you know, now I guess it's pulled at the end of the year but in terms of really accessing that as a provider, um, uh that doesn't happen much.
– Physician, Site K

Being able to remove ignorance and make that data available in a user-friendly way is paramount and important and unfortunately from my perspective; our systems and data systems. We are one data-rich organization. We've got data out the ying-yang but our ability to effectively analyze it, capture it and roll it out in a meaningful way leaves a lot to be desired;
– Facility Director, Site M

Positive mental models: EPRP is a means to an end (transparency, benchmarking, and strategic alignment)
We also observed an almost equally prevalent, positive mental model at a different set of sites (n = 4), where interviewees viewed EPRP as a means to an end (improving quality of care). The specific manifestation of that end varied from site to site. For example, at one site, EPRP was a way to improve transparency:

We try to be totally transparent. Uh sometimes to the point of uh being so transparent people can see everything … and sometimes they may not sound good but if you consistently do it I think you know people understand that.
– Facility Director, Site B

At another site, EPRP was perceived as a tool for strategic alignment:

I think the VA is- is um I think they are wise in connecting what they feel are important clinical indicators with the overall performance measurement and the performance evaluation of the director so that the goals can be aligned from the clinical staff to the administrative staff and we've been very fortunate. We've gotten a lot of support [latitude in how to best align] here.
– Primary Care Chief, Site A

At other sites, EPRP was seen as a clear tool for benchmarking purposes:

You benchmark what your model or your goal or best standards of care are, and that is basically on the spectrum that embodies the whole patient, all the way from the psychological aspect to the community aspect or the social workers too. … That's the way that I see EPRP. EPRP is only you try to just set up several variables that you can measure that at the end of the day will tell you, you know what, we are taking a holistic approach to this patient and we are achieving what we think is best in order to keep this patient as healthy as possible.
– Primary Care Chief, Site H
Mixed mental model: EPRP is not accurate, but it helps us improve nonetheless
Of note, site L exhibited an interesting mix. Like the sites with the more negative mental models, respondents reported concerns with the quality of the data being presented to them. However, this site was unique in that, despite the limitations of the data, respondents nonetheless use the data for benchmarking purposes because, if the targets are met, it is an indication to them that a certain minimum level of quality has been reached at the site.

I'd start worrying and looking at why or what am I doing that's causing it to be like this. Is it the way they pull the data? Because it's random. It's not every single patient that is recorded. Um, they pull, I believe anywhere from 5 to 10 of your charts. … So I do ask that and then if it is a true accounting then I go "OK then it's me. It's gotta be me and my team." I look at what my team is doing or what portion of that, that performance is performed by my staff and what portion of it is by me. And then from there I go OK. Then I weed it out.
– Physician, Site L

Unintended negative consequences: EPRP is making clinicians hypervigilant
In addition to the mental models described above, a second, negative mental model was observed at a single site about the unintended consequences of feeding back EPRP data: clinicians reported a feeling of hypervigilance as a result of the increased emphasis on the performance measures in the EPRP system:

There's just a lot of pressure, uh you know, that, uh it seems like leaders are under and you know, I'm in a position where I try to buffer some of that so that my providers don't feel the pressure but when your performance measures and, um other, uh things are all being looked at, um and it comes down, uh you know, through our, our electronic medium almost instantly, um it's, uh you know, looking for responses, why did this happen, it's very hard for people to feel, feel comfortable about anything.
– Primary Care Chief, Site F
Relating mental models of feedback to clinical performance
Table 3 presents the 12 sites, a summary of their respective mental models, and their classification according to the three dimensions in 2008. No significant association was observed between performance (in 2008 or 2012) and either intensity or consistency when tested via logistic regression and analysis of variance with sites as between-subject factors, though we acknowledge that 12 is a rather small sample size for this type of test. However, we did observe two noteworthy, descriptive patterns: first, none of the sites' mental models were highly positive; in other words, at best, sites perceived EPRP performance measure-based feedback to be a means to an end (transparency, benchmarking, or strategic alignment), rather than a highly valued, integral best practice for delivering high-quality care. Second, 75% of sites exhibited moderate or low levels of feedback intensity.
Our final analyses examined descriptively whether sites with positive vs. negative vs. mixed mental models exhibited common patterns of clinical performance. Among sites with negative mental models, all four clinical performance categories from 2008 were represented; clinical performance also varied considerably among these sites in 2012, though no high performers were observed among these sites in 2012. Among sites with positive mental models, the low clinical performance category was the only category not represented in 2008, whereas all levels of clinical performance were represented in 2012. Sites with mixed or neutral mental models exhibited mostly low performance during both time periods (site L, which exhibited highly variable performance in 2008, was the lowest performer in the highly variable category). Of note, sites L and J were also the only two sites with low intensity of feedback.

Table 3 Summary of sites' mental models and degree of positivity, intensity, and consistency
Performance category (2007-2008) | Site | Performance category (2011-2012) | Mental model summary | Sign/theme | Intensity | Consistency
High performers | B | Moderate | Aim: EPRP feedback is communicated, tracked, and improved upon in a ubiquitous, transparent, non-punitive, systematic, and consistent way. | Positive: transparency | Medium | High
High performers | H | High | EPRP as a benchmark or model for the best standards of care for keeping the whole patient as healthy as possible. | Positive: benchmarking | Medium | Low
High performers | M | Moderately high | EPRP measures are generally OK, but not sophisticated enough to reflect actual care quality. | Negative: EPRP not a good representation of quality | Medium | High
Consistently moderate | D | Moderate | EPRP serves as a primary means of linking the work/efforts of all facility staff to the facility's mission: to provide the best quality of care that veterans expect and deserve. | Positive: strategic alignment | Medium | Medium
Consistently moderate | K | Moderate | EPRP is not a real, true reflection of the quality of one's practice because of the sample size at a particular time period. | Negative: EPRP not a good representation of quality | Medium | Medium
Consistently moderate | E | Moderately high | Clinicians think EPRP is inferior to their population-based, VISN-created dashboard, and leaders have concerns about overuse and misinterpretations or misuse of EPRP. | Negative: EPRP not a good representation of quality | Medium | Medium
Highly variable | A | Low | EPRP remains relevant as a starting point for setting, aligning, and monitoring clinical performance goals. | Positive: strategic alignment | Medium | High
Highly variable | G | Moderately low | EPRP as an "outside checks and balances" system that validates whether how the facility thinks they are doing (e.g., good job) and what challenge areas they have are accurate; however, there are no real or punitive consequences to scoring low. | Mixed | Medium | Medium
Highly variable | L | Low | Although EPRP does not reflect actual care quality, the numbers indicate that they are doing something consistently right that helps their patients. | Mixed | Low | High
Low performers | F | Moderate | Immediate feedback is advantageous to memory, but not always well received. | Negative: EPRP has made us hypervigilant | High | Medium
Low performers | C | Low | EPRP is viewed by some as an objective, unbiased measure with some sampling limitations; by others, EPRP is viewed as inaccurate or retrospective. | Negative: EPRP not a good representation of quality | Medium | Low
Low performers | J | Low | Site struggles to provide feedback; clinicians did not receive EPRP and PMs until the PACT implementation. | No feedback until PACT | Low | Medium
Note: 2012 performance categories differ from 2008 because 2012 performance forms a continuum rather than discrete categories (see Fig. 1 and the main text for details)
Discussion
This study aimed to identify mental models of audit and feedback exhibited by outpatient facilities with varying levels of clinical performance on a set of chronic and preventive care clinical performance measures. Consistent with the highly individualized cultures and modi operandi of individual VA facilities across the country, we found high variability in facilities' mental models of audit and feedback, and no clear relationship between variability among these facilities' mental models of audit and feedback and their facility-level clinical performance. Interestingly, one area of consistency across all facilities is that, without prompting, mental models of EPRP reported by interviewees included some component of the quality of the data.
These findings are consistent with previous research noting considerable variability in individual and shared mental models of clinical practice guidelines [3]. Additionally, the finding that the two sites with neutral mental models, and by extension low intensity of feedback, exhibited low clinical performance is consistent with previous research indicating feedback intensity may moderate feedback effectiveness [4]. Contrary to previous research, however, no particular mental model of feedback was found to be associated with better clinical performance. Two possible reasons for this include the conditions under which information is processed (e.g., fuzzy trace theory suggests affect impacts information processing and memory recall [20, 21]) and the local adaptation, for feedback purposes, of a standardized national program such as EPRP.
Another possible reason for the lack of relationship between clinical performance and mental models could concern source credibility. The strongest negative mental model indicated providers simply did not perceive EPRP to be a credible source of clinical performance assessment. A recent model of physician acceptance of feedback [22] posits that low feedback source credibility is more likely to be associated with negative emotions such as apathy, irritation, or resentment; however, depending on other factors, this emotional reaction can lead either to no action (no behavior modification) or to counterproductive actions (e.g., defending low performance rather than changing it); the model does not specify what factors may compel someone to adopt one pathway (e.g., inaction) vs. another (e.g., action resulting in negative outcomes). Although half of the sites reported having a mental model consistent with low source credibility, our data did not capture each site's specific pathway from emotional reaction to behavioral outcome and impact. It is therefore possible that unspecified factors outside our data could have been the deciding factor, thus explaining the variability in clinical performance even within sites with a strong negative mental model of EPRP as a source of feedback.

Implications
Our study demonstrates the need to extend theory- and evidence-based approaches to feedback design and implementation to foster receptive mental models toward feedback [5]. Implementing effective feedback practices may elicit receptiveness to the feedback systems in place, allowing healthcare organizations to reap greater gains from performance management systems that cost millions to sustain. One noteworthy implication of this study is that positive mental models regarding feedback hinge partially on the feedback system's ability to deliver clinical performance data in ways the various users can understand quickly and act upon personally (that is, feedback that can be translated into an actionable plan for individual improvement [4, 23]). The strongest and most common mental model observed was that EPRP was not a good representation of care quality; if feedback recipients question the credibility of the feedback, it is unlikely to effectively change performance.
Limitations
As with all research, this study had limitations. While data were being collected for this study, a seminal quality improvement initiative was implemented: VA's nationwide transition from traditional primary care clinics to Patient Aligned Care Teams (VA's implementation of the Primary Care Medical Home model) occurred in 2010. This may in part account for the absence of an observable relationship between mental models and clinical performance. Team-based care involves more complex coordination among clinical staff and introduces new roles, responsibilities, and relationships among existing clinical personnel, which may in turn have taxed the clinicians enrolled in the current study [24]. Further, clinicians and divisions of health service may require time to adjust to their new roles, responsibilities, relationships, and changes in workflow. We believe this is illustrated in our data: the mean performance score across all sampled facilities dipped significantly from 2008 to 2012 (2008: mean = .935, 95% CI = .004; 2012: mean = .913, 95% CI = .004; p < .001), variability in performance across measures was visibly greater for all facilities observed (2008: mean stdev = .05, 95% CI = .004; 2012: mean stdev = .07, 95% CI = .003; p < .001), and our sites' standings in the distribution materially changed (see Fig. 1 for all of these patterns). It is therefore possible that this initiative was a sufficiently large system change to overwhelm any potentially detectable relationship between mental models of feedback and clinical performance.
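As a purely illustrative aside (not the authors' reported analysis, whose exact statistical test is not specified above), the shape of this year-over-year comparison could be sketched as follows. The DataFrame scores, its column names, and the choice of an independent-samples t-test are all assumptions made only for illustration.

# Illustrative sketch only: the kind of comparison behind the mean and
# variability figures quoted above. `scores` is a hypothetical long-format
# DataFrame with columns 'facility', 'year' (2008 or 2012), 'measure', 'score'.
import pandas as pd
from scipy import stats

def compare_years(scores: pd.DataFrame) -> dict:
    # Facility-level mean and spread across measures, computed per year.
    by_facility = (
        scores.groupby(["year", "facility"])["score"]
        .agg(mean="mean", sd="std")
        .reset_index()
    )
    out = {}
    for stat in ("mean", "sd"):
        a = by_facility.loc[by_facility["year"] == 2008, stat]
        b = by_facility.loc[by_facility["year"] == 2012, stat]
        t, p = stats.ttest_ind(a, b)
        out[stat] = {"2008": a.mean(), "2012": b.mean(), "p": p}
    return out

A paired or mixed-effects comparison would arguably fit the data better, since the same facilities contribute scores in both years; the sketch above is only meant to make the comparison concrete.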
Second, we were unable to use all the data we collected, due to the limited number of interviews at some sites (n = 1; see the data analysis section for more details); however, all four performance categories contained one site with unusable data, thereby minimizing any risk of bias toward a given performance category.

Conclusions
Despite a national, standardized system of clinical performance measurement and tools for receiving clinical performance information, mental models of clinical performance feedback vary widely across facilities. Currently, evidence supports a strong negative mental model toward feedback, likely driven by source credibility concerns, suggesting that EPRP measures may be of limited value as a source of clinical audit and feedback to clinicians. Although our data are at this time inconclusive with respect to the relationship of these mental model differences to performance, the variability in mental models suggests variability in how feedback systems relying on EPRP were implemented across sites; future work should concentrate on identifying implementation factors and practices (such as establishing the credibility of clinical performance measures in the eyes of clinicians) that lead to strong, productive mental models of the innovation being implemented.

Endnotes
Defined as the average of all clinical performance measure scores used in the performance profile for a given facility in a given performance period [13].

Additional files
Additional file 1: COREQ (COnsolidated criteria for REporting Qualitative research) checklist. (PDF 489 kb)
Additional file 2: Interview guide. (DOCX 26 kb)
Additional file 3: Example site summary. (DOCX 19 kb)

Acknowledgements
We would like to thank El Johnson, Gretchen Gribble, and Thach Tran for their previous contributions to data collection and analysis on this project (EJ and GG for data collection assistance, TT for data analysis assistance).

Funding
The research reported here was supported by the US Department of Veterans Affairs Health Services Research and Development Service (grant nos. IIR 09-095, CD2-07-0181) and partly supported by the facilities and resources of the Houston VA HSR&D Center of Excellence (COE) (CIN 13-413). All authors' salaries were supported in part by the Department of Veterans Affairs.

Availability of data and materials
The datasets generated and/or analyzed during the current study are not publicly available, as they are individual interviews that may potentially identify participants and compromise confidentiality, but they are available from the corresponding author on reasonable request.

Authors' contributions
SJH conceptualized the idea, secured the grant funding, analyzed the data, and is responsible for the principal manuscript writing, material edits to the manuscript, and material contributions to the interpretation. KS and RS contributed to the data collection, data analysis, material edits to the manuscript, and material contributions to the interpretation. AA contributed to the data analysis and material edits to the manuscript. AH contributed to the material edits to the manuscript and material contributions to the interpretation. PH contributed to the data analysis, material edits to the manuscript, and material contributions to the interpretation. All authors read and approved the final manuscript.

Authors' information
Contributions by Drs. Paul Haidet and Ashley Hughes, currently at Penn State University College of Medicine and the University of Illinois College of Applied Health Sciences, respectively, were made during their tenure at the Michael E. DeBakey VA Medical Center and Baylor College of Medicine. The views expressed in this article are solely those of the authors and do not necessarily reflect the position or policy of the authors' affiliate institutions, the Department of Veterans Affairs, or the US government.

Ethics approval and consent to participate
Our protocol was reviewed and approved by the Institutional Review Board (IRB) at Baylor College of Medicine (protocol approval #H-20386).

Competing interests
The authors declare that they have no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author details
Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA. Baylor College of Medicine, 2002 Holcombe Blvd, Houston, TX 77030, USA. VISN 4 Center for Evaluation of PACT (CEPACT), Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA. Department of Biomedical and Health Information Sciences, University of Illinois at Chicago, Chicago, IL, USA. Penn State University College of Medicine, Hershey, PA, USA. Center for Innovations in Quality, Effectiveness and Safety, Houston, Texas, USA.

Received: 17 November 2017. Accepted: 17 May 2018. Published: 30 May 2018.
References
1. Committee on Quality of Health Care in America. Crossing the quality chasm: a new health care system for the 21st century. Washington, DC: National Academies Press; 2001.
2. Committee on Quality of Health Care in America. Rewarding provider performance: aligning incentives in Medicare. Washington, DC: National Academies Press; 2006.
3. Hysong SJ, Best RG, Pugh JA, Moore FI. Not of one mind: mental models of clinical practice guidelines in the Veterans Health Administration. Health Serv Res. 2005;40(3):829-47.
4. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, O'Brien MA, Johansen M, Grimshaw J, Oxman AD. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:1-229.
5. Ivers NM, Sales A, Colquhoun H, Michie S, Foy R, Francis JJ, Grimshaw JM. No more 'business as usual' with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implement Sci. 2014;9:14.
6. Kraiger K, Wenzel LH. Conceptual development and empirical evaluation of measures of shared mental models as indicators of team effectiveness. In: Brannick MT, Salas E, Prince E, editors. Team performance assessment and measurement: theory, methods and applications. Mahwah, NJ: Lawrence Erlbaum; 1997. p. 63-84.
7. Johnson-Laird PN, Girotto V, Legrenzi P. Mental models: a gentle guide for outsiders. Sistemi Intelligenti. 1998;9(68):33.
8. Reyna VF, Brainerd CJ. Fuzzy-trace theory and framing effects in choice: gist extraction, truncation, and conversion. J Behav Decis Mak. 1991;4(4):249-62.
9. Reyna VF. A theory of medical decision making and health: fuzzy trace theory. Med Decis Mak. 2008;28(6):850-65.
10. Cooke NJ, Gorman JC. Assessment of team cognition. In: Turner CW, Lewis JR, Nielsen J, editors. International encyclopedia of ergonomics and human factors. Boca Raton, FL: CRC Press; 2006. p. 270-5.
11. DeChurch LA, Mesmer-Magnus JR. The cognitive underpinnings of effective teamwork: a meta-analysis. J Appl Psychol. 2010;95(1):32-53.
12. Schmidt RA, Bjork RA. New conceptualizations of practice: common principles in three paradigms suggest new concepts for training. Psychol Sci. 1992;3(4):207-17.
13. Hysong SJ, Teal CR, Khan MJ, Haidet P. Improving quality of care through improved audit and feedback. Implement Sci. 2012;7:45-54.
14. Hysong SJ, Best RG, Pugh JA. Audit and feedback and clinical practice guideline adherence: making feedback actionable. Implement Sci. 2006;1(1).
15. Strauss AL, Corbin J. Basics of qualitative research: techniques and procedures for developing grounded theory. 2nd ed. Thousand Oaks, CA: Sage Publications; 1998.
16. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277-88.
17. Weber RP. Basic content analysis. Vol. 49. 2nd ed. Newbury Park, CA: Sage; 1990.
18. Atlas.ti. Version 6.2. Berlin: Scientific Software Development; 2010.
19. Klimoski R, Mohammed S. Team mental model: construct or metaphor? J Manag. 1994;20(2):403-37.
20. Reyna VF, Brainerd CJ. Fuzzy-trace theory: an interim synthesis. Learn Individ Differ. 1995;7(1):1-75.
21. Brainerd CJ, Reyna VF. Fuzzy-trace theory and false memory. Curr Dir Psychol Sci. 2002;11(5):164-9.
22. Payne VL, Hysong SJ. Model depicting aspects of audit and feedback that impact physicians' acceptance of clinical performance feedback. BMC Health Serv Res. 2016;16(1):260.
23. Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O'Brien MA, French SD, Young J, Odgaard-Jensen J. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534-41.
24. Hysong SJ, Knox MK, Haidet P. Examining clinical performance feedback in patient-aligned care teams. J Gen Intern Med. 2014;29:667-74.
25. Byrne MM, Daw CN, Nelson HA, Urech TH, Pietz K, Petersen LA. Method to develop health care peer groups for quality and financial comparisons across hospitals. Health Serv Res. 2009;44(2 Pt 1):577-92.
