Clinic Satisfaction Tool Improves Communication and Provides Real-Time Feedback

ABSTRACT

BACKGROUND: Patient-reported assessments of the clinic experience are increasingly important for improving the delivery of care. The Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS) survey is the current standard for evaluating patients' clinic experience, but its format delivers feedback 2 mo after the visit and from only a small proportion of clinic patients. Furthermore, it fails to give specific, actionable results on individual encounters.

OBJECTIVE: To develop and assess the impact of a single-page Clinic Satisfaction Tool (CST) designed to provide real-time, individualized, interpretable, and actionable feedback; improve patient satisfaction and communication scores; increase physician buy-in; and demonstrate overall feasibility.

METHODS: We assessed CST use for 12 mo and compared patient-reported outcomes to the year prior. We assessed all clinic encounters for patient satisfaction and all physicians for CG-CAHPS global rating and physician communication scores, and evaluated the physician experience 1 yr after implementation.

RESULTS: During implementation, 14 690 patients were seen by 12 physicians, with a 96% overall CST utilization rate. Physicians considered the CST superior to CG-CAHPS in providing immediate feedback. CG-CAHPS global scores trended toward improvement and were predicted by CST satisfaction scores (P < .05). CG-CAHPS physician communication scores were also predicted by CST satisfaction scores (P < .01). High CST satisfaction scores were predicted by high utilization (P < .05). Negative feedback dropped significantly over the course of the study (P < .05).

CONCLUSION: The CST is a low-cost, high-yield improvement over the current method of capturing the clinic experience: it improves communication and satisfaction between physicians and patients and provides real-time feedback to physicians.
Keywords: Clinic satisfaction tool, Patient satisfaction, Patient experience, CG-CAHPS, Neurosurgery clinic

ABBREVIATIONS: CG-CAHPS, Clinician and Group Consumer Assessment of Healthcare Providers and Systems; CST, Clinic Satisfaction Tool; IRB, institutional review board; PC, physician communication; SQUIRE, Standards for Quality Improvement Reporting Excellence

Up to half of all patients leave the clinic visit with an unvoiced need.1-3 This may be due to forgetting, feeling uncomfortable or intimidated, or running out of time. These unvoiced needs are problematic because provider–patient miscommunication can lead to suboptimal patient care, including reduced symptom improvement in some patients and even patient safety issues.2,4 Conversely, while patients who make more requests are seen as demanding by their providers, they are also more likely to have their needs fulfilled and to be satisfied with the outcome of their visit.5-7 There have been many attempts to evaluate the patient experience, but most, including the nationally implemented Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS), do not address this major void in outpatient care.8-10 In addition, CG-CAHPS fails to deliver timely, actionable feedback: results arrive months after a visit, and performance is reported only as percentage scores for each closed-ended question.
Because CG-CAHPS was validated in a primary care setting, its results are not generalizable to specialties that carry a large burden of severe, chronic, and/or surgical disease.11-14 Patients find such surveys burdensomely long and the results laborious to understand.15-17 In addition, physicians can find these surveys unreliable, biased, or irrelevant to their practice.18-21 For example, orthopedic surgeons and neurosurgeons score worse on communication than the general population of physicians, but those low scores can depend on factors irrelevant to the actual episode of care, such as scheduling and parking.21,22 There have been attempts to design better questionnaires on the inpatient side, but a recent study using an outpatient questionnaire was limited by the same low response rates as CG-CAHPS.10,23-25

Because of the failures of current methods, our team, with input from our nursing staff, nurse manager, and other providers in our clinic, designed the Clinic Satisfaction Tool (CST). We used an open-ended, minimalist approach to questionnaire design in order to integrate the CST seamlessly into each visit. We then implemented the CST in a multidisciplinary spine clinic and evaluated its performance over 12 mo. The specific aims of the CST were to provide real-time, individualized, interpretable, and actionable feedback; improve patient satisfaction and communication scores; increase physician buy-in; and demonstrate overall feasibility.

METHODS

We received institutional review board (IRB) approval to perform a pre–post-intervention quality improvement study that included all physicians in the spine clinic who had a full year of baseline CG-CAHPS scores. Beginning January 2016, the CST was integrated into the daily clinic workflow and every patient was provided the CST.
SQUIRE 2.0 (Standards for Quality Improvement Reporting Excellence) guidelines were used for reporting our findings.26

Inclusion Criteria

Physicians were included in the analysis if they were in neurosurgery, orthopedic surgery, or physiatry; in outpatient practice at the Durham, NC location of the Duke Spine Center from January 2015 to December 2016; and in active practice for the majority of the 2 yr observed. Six neurosurgeons, 3 orthopedic surgeons, and 3 physiatrists were included, for a total of 12 physicians (Table 1). Our physician cohort was mostly male and almost all fellowship-trained, with a wide range of years in practice (3-35 yr).

TABLE 1. Physician Demographics (n = 12)
  Gender: 9 (75%) male
  Specialty: 6 (50%) Neurosurgery; 3 (25%) Orthopedics; 3 (25%) Physiatry
  Fellowship-trained: 10 (83%)
  Years in practice, median (range): 9 (3-35)

All patients who presented for a visit within the Spine Center received a CST.

Intervention

The CST (Figure 1) is a patient-centered questionnaire assessing each patient's perception of provider communication and satisfaction with care. The CST also aims to give providers measurable and actionable feedback that they can incorporate into their daily practice.

FIGURE 1. Clinic Satisfaction Tool.
The CST is completed by every patient at each appointment, as shown in Figure 2. While waiting for their provider, patients list their chief complaints in the top half of the form. The physician reviews the CST with the patient to ensure that all concerns are addressed.

FIGURE 2. Clinic workflow with immediate feedback.

After the provider leaves, the patient completes the feedback section of the CST. The form is then collected by the nurse and returned to the physician immediately. If a patient responds "No" to the question "Were you satisfied with your visit?" or "Were all of your questions addressed today?", a rescue process is initiated and the physician returns to the room or follows up with the patient by phone within 24 h. This ensures that a real-time solution is reached expediently for any problems. The feedback from the CST is intended to lend insight into areas of improvement for both providers and the overall clinic. On a weekly and monthly basis, written comments from the CST are reported to each provider in the clinic. These reports include intervention utilization scores, Yes/No scores on each question, and written patient feedback.

Data Collected

For each physician, monthly CG-CAHPS reports from January 2015 to December 2016 were collected, including global rating and physician communication (PC) subscores. The CST was implemented in the Duke Spine Center in January 2016. The following information was collected from every patient: chief complaints; how their visit could be improved; satisfaction with visit (yes/no); and all questions answered (yes/no). The patient volume, the number of CSTs returned, the proportion of CSTs marked yes for satisfaction and questions answered, the number of comments, and the types of comments (positive, negative, not applicable) were calculated for each physician on a monthly basis.
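The monthly per-physician reporting described above (utilization, the satisfaction "Yes" score, and rescue triggers) can be sketched in a few lines of code. This is an illustrative outline only: the `CstForm` fields and the `monthly_report` helper are hypothetical names invented for this sketch, not part of any actual clinic system.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CstForm:
    """One returned CST; all field names here are illustrative assumptions."""
    physician: str
    satisfied: Optional[bool]           # "Were you satisfied with your visit?" (None = feedback left blank)
    questions_answered: Optional[bool]  # "Were all of your questions addressed today?"


def needs_rescue(form: CstForm) -> bool:
    """A "No" on either feedback question triggers the rescue process:
    the physician returns to the room or phones the patient within 24 h."""
    return form.satisfied is False or form.questions_answered is False


def monthly_report(patients_seen: int, forms: list) -> dict:
    """Compute the monthly metrics described above: utilization
    (forms returned / patients seen) and the satisfaction "Yes" score
    (yes to satisfaction and/or questions answered, out of those who
    completed the feedback section)."""
    completed = [f for f in forms
                 if f.satisfied is not None or f.questions_answered is not None]
    yes = [f for f in completed if f.satisfied or f.questions_answered]
    return {
        "utilization": len(forms) / patients_seen if patients_seen else 0.0,
        "satisfaction_yes": len(yes) / len(completed) if completed else 0.0,
        "rescues": sum(needs_rescue(f) for f in forms),
    }
```

For example, with 4 patients seen and 3 forms returned (one fully satisfied, one answering "No" to satisfaction, one leaving the feedback section blank), `monthly_report` would report 75% utilization and flag one rescue.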
Implementation Assessment

Eleven of the physicians included in the study (excluding the senior author) were emailed a Qualtrics survey asking them to consider the experiences of 3 stakeholders (themselves, patients, and nursing) to give a comprehensive evaluation of the CST. The survey was sent 18 mo into implementation of the CST to assess its feasibility in clinic, its utility as a feedback tool, and provider and nursing satisfaction.

Statistical Analysis

We collected 12 mo of longitudinal data from 12 providers within 3 departments. We therefore used a linear mixed model to account for interprovider and interdepartmental variation; equal variance was assumed within departments. Kruskal-Wallis testing was used for individual categorical variables because the outcome measures had skewed distributions, and univariate mixed model regression was used for continuous data. Mixed model multivariate regression was then performed for 3 outcome measures: CG-CAHPS global score; CG-CAHPS PC score; and CST satisfaction "Yes" score (the proportion of patients who checked yes to satisfaction and/or questions answered, out of those who completed the feedback section). For determination of significance, α was set to 0.05. Datasets were unified in Microsoft Excel (Microsoft Inc, Redmond, Washington), and all analyses were performed with RStudio (RStudio Inc, Boston, Massachusetts).

RESULTS

CST Utilization

During the first 12 mo of CST implementation, 14 690 patients were seen, of whom 95.6% returned the CST. Monthly utilization of the form increased from below 85% to above 95% by the third month of the study and remained high throughout. This utilization rate was concordant with the physicians' perception of patient usage (Tables, Supplemental Digital Content 1 and 2). The highest overall patient volume belonged to physiatry (54%), with the rest divided between neurosurgery (23%) and orthopedics (23%).
Satisfaction Results

CG-CAHPS Scores

Over the 12 mo of the study, there was a trend toward improvement in the mean CG-CAHPS global score for all providers, but these trends were not statistically significant (Figure 3). There was no significant change in CG-CAHPS PC scores for our cohort.

FIGURE 3. CG-CAHPS global score, all providers, Jan 2015 to Jan 2016. Global score is given as a proportion. The CST was implemented beginning in January 2016. Overall R2 = 0.0008, P = .74.

CST Scores

During the first 12 mo of CST collection, average provider satisfaction ratings were above 97% at all times. CST satisfaction scores demonstrated a clear trend toward improvement over time (P = .06). Among neurosurgeons, average ratings remained above 96% at all times. There was a decreasing trend in the percentage of negative comments received by all providers over the course of the study (Figure 4). Follow-up investigation of the rise in the last quarter revealed outlier results from 1 physician with a known external cause, and excluding that individual revealed a much stronger trend (P = .05).

FIGURE 4. Negative comments on CST decrease over time. A single provider contributed the majority of negative comments in the last 3 mo of the study secondary to personal reasons. There was a decreasing trend in negative feedback throughout the study that is strengthened without the outlier provider. Red = adjusted, teal = original comment score. Original: R2 = 0.03, P = .57. Adjusted: R2 = 0.33, P = .05.

Predictors of Patient Satisfaction

CG-CAHPS Global Predictors

Each patient-reported outcome was tested across the 12-mo period for predictors among department, month, patient volume, CST utilization, and CST satisfaction score (Table 2). Neurosurgery had significantly higher baseline global scores than orthopedics and physiatry (P = .001). In univariate analysis, CG-CAHPS global rating was significantly predicted by CST satisfaction over our study period. Patient volume had a negative relationship with global ratings, with a drop of 0.5% per 20 additional patients per month (roughly 1 additional clinic day), but this relationship was not significant and our data were not powered to detect a small effect at the 0.05 significance level. Conversely, CST satisfaction was a positive predictor of CG-CAHPS global rating (P = .03). In the multivariate model, global scores were best predicted by CST satisfaction (P = .03) after accounting for provider and department. Comment type was removed from the regression due to low comment counts.

TABLE 2. Overall Predictors of Patient Satisfaction

Outcome measure                     Variable           Univariate P   Multivariate P
CG-CAHPS Global Rating              Department         .001a          Ortho .28; Physiatry .13
                                    Month              .78            b
                                    Patient volume     .38            b
                                    CST utilization    .27            b
                                    CST satisfaction   .03            .03
CG-CAHPS Physician Communication    Department         .16            Ortho .67; Physiatry .99
                                    Month              .11            .09
                                    Patient volume     .39            b
                                    CST utilization    .73            b
                                    CST satisfaction   .01            .009
CST Satisfaction Score              Department         .04a           Ortho .93; Physiatry .44
                                    Month              .48            b
                                    Patient volume     .75            b
                                    CST utilization    .02            .05

aPost-hoc testing showed Neurosurgery received the highest average rating, followed by Orthopedics, then Physiatry. Department was used in the mixed model as a grouping variable, not as a predictor.
bVariables with a univariate P-value ≥ .20 were not included in the multivariate model.

CG-CAHPS PC Predictors

In univariate analysis, CST satisfaction was positively correlated with the CG-CAHPS PC rating over our study period (P = .01); however, there was no significant difference between departments. On multivariate analysis, CST satisfaction remained a significant predictor of PC scores (P = .009).

CST Satisfaction Predictors

Neurosurgery scores exceeded those of the other departments (P = .04).
In univariate analysis, high CST utilization by the provider was associated with higher CST scores (P = .02). On multivariate analysis, utilization trended toward being a positive predictor of CST satisfaction (P = .05).

Implementation Assessment Survey Results

Workflow and Communication

Nine of the 11 physicians surveyed (82%) responded (Tables, Supplemental Digital Content 1-3). Overall, the providers included in the study used the CST in the manner intended: it was given to almost every patient, reviewed with the patient by the physician while in the room, and considered by physicians to be helpful for communicating with patients. Physicians also felt that patients generally liked the tool and were able to communicate better with the CST, and that nurses were helpful in encouraging patients to leave feedback on the CST.

CST Utility

In the last section of the survey, physicians were asked to discuss their personal implementation of the CST and its utility. Seventy-five percent of physicians regularly used the individual CSTs as a source of patient feedback. However, only 38% used the monthly emailed CST report on a regular basis, with another 38% using it sometimes. In comparison, 88% of physicians used the monthly CG-CAHPS reports as a source of patient feedback. Physicians were then asked to compare the reports' utility: for half of the physicians, the CST and CG-CAHPS reports were equally useful. Three-quarters of the remainder found the CST more useful than CG-CAHPS, and one individual did not use either as a source of feedback. The reasons given and constructive feedback are listed in Table 3.

TABLE 3. Physician Feedback Results
  A. CST utility over CG-CAHPS
    • "more specific"
    • "It provides immediate opportunity to improve care"
    • "allows patients to list [their] specific questions"
  B. Constructive feedback
    • "Need to make sure its reviewed prior to entering room and also prior to patient leaving the room"
    • "I would like to see the nurse/cma encourage the patient to put feedback"
    • "standardize workflow among all clinic staff on soliciting post-visit feedback"
    • "would be helpful to see questions before seeing patient"

DISCUSSION

This single-center, multidisciplinary pre–post study encompassing 12 physicians and 14 690 patients demonstrated that a single-page questionnaire can be easily implemented with almost perfect utilization using existing clinic infrastructure. Physicians felt that the CST was useful and provided individualized, actionable, real-time feedback. CST scores trended with CG-CAHPS global and PC scores. CST global but not PC scores improved modestly over the course of the study. The highest CST performers were those who utilized the tool most consistently.

CST Addresses CG-CAHPS Limitations

The CST was designed to address specific problems with CG-CAHPS (Table 4). First, the shortest version of CG-CAHPS is 4 pages long but could be substantially shortened without a significant loss of reliability.17,27 While ours is not the first program to develop a shorter method for collecting patient feedback, we are the first to demonstrate robust feedback return rates of 96%.10,24,28-30 Despite its length, CG-CAHPS fails to capture open-ended patient data.31 The long form also contributes to poor return rates: at our institution, only 10% of patients respond to the global question, while other institutions report response rates between 9% and 58%.24,32-34 This low return risks nonresponse bias and challenges the validity of CAHPS results.35

TABLE 4. CST Design Features
  1. CG-CAHPS limitation: Long form discourages feedback and solicits few free-text responses. CST feature: Single sheet reduces form fatigue, and free text is easy to use for every patient and clinician.
  2. CG-CAHPS limitation: Aggregated data limit association with particular patients. CST feature: Every form is associated directly with the patient and encounter.
  3. CG-CAHPS limitation: Low buy-in from all stakeholders. CST feature: Easy and efficient for both patients and providers to understand.
  4. CG-CAHPS limitation: Large delay between visits and results. CST feature: Immediate, real-time feedback and actionable content.

Secondly, we restored individualized feedback, because aggregated reports eliminate any association of results with particular encounters. Aggregates are useful for studying trends, but blinding patient-level data leaves providers unable to interpret comments in the context of patient characteristics. Predictors of decreased satisfaction include lower age, less education, higher pain/disability scores, not speaking English, and uninsured or Medicaid payer status.20,22,36-43 Patients also prefer affective behavior and the spontaneous sharing of information from their providers, as well as providers who match their vocabulary.44,45 For neurosurgical and spine patients, satisfaction is negatively associated with nonsurgical treatment plans, due to either a lack of patient understanding or inadequate communication from the surgeon.20,36 When neurosurgeons clearly address the expected outcomes of a plan with a patient, as facilitated by the CST, patients are more likely to have an optimal outcome and be satisfied with their care experience independent of their actual surgical outcomes.12,46,47 By having each feedback form available for review during the visit, physicians can address individual needs, contextualize CST feedback, and adjust their behavior for each patient.

Thirdly, we addressed low buy-in to CG-CAHPS from both providers and patients.
Barriers to feedback initiatives include poor team communication, a lack of buy-in from stakeholders, and difficult or cumbersome implementation.48-50 Initiatives with low response rates, like CG-CAHPS, are felt to be unrepresentative, and often they are not detailed enough to be actionable.35,51 CG-CAHPS in its current incarnation has been cited as a poor metric for evaluating physician performance and care quality.11,12,18 Even patients prefer to use other, simpler satisfaction measures to make decisions about choosing a physician.15,16 Furthermore, the PC subscore of CG-CAHPS fails to capture dimensions of the encounter that patients feel are important.52 Our intervention avoided these issues because the minimalist, single-page format was easy for both patients and providers to use. This resulted in a high return rate of 96%, eliminating concern for nonresponse bias. Additionally, the open-ended questions made the form relevant for all 3 specialties, and the monthly CST reports were rated as more useful than CG-CAHPS reports.

Lastly, we restored the temporal association between visits and results. Survey dissemination and feedback lag by 2 to 3 mo for CG-CAHPS.53 Real-time feedback can be especially helpful and is often preferred where available, as providers can associate feedback with particular behaviors and encounters.24,28,54 For any feedback to be useful, there must be a pathway to improve.18 Once barriers are minimized, feedback is useful for improving the patient clinic experience, especially for providers whose scores are low.34,55,56 The CST provides instant, real-time feedback after each visit, and patients can leave constructive comments associated with the visit.

CST Captures Some Parts of CG-CAHPS

A central goal in the design of the CST was to assess the clinic experience while minimizing the burden on staff and patients.
The global rating for CG-CAHPS v3.0 is based solely on question 18, which asks patients to rate their provider on a 10-point scale. The PC score is based on 6 multiple-choice questions with responses ranging from never to always, covering: knowing the patient's medical history; clear explanations; easy-to-understand information; careful listening; respect for patients' words; and spending enough time.9 Despite being based on only 2 categories, CST satisfaction was a predictor of both global and PC scores. There is face validity in the relationship between visit satisfaction on the CST and overall impression of the provider. The global score question asks only: "Using any number from 0 to 10, where 0 is the worst provider possible and 10 is the best provider possible, what number would you use to rate this provider?", and credit for our metrics is only given for the top-box response, a 10. The connection between CST satisfaction and communication is more complex, however. By soliciting chief complaints and providing a tangible copy for patients and providers to reference during a visit, the CST addressed questions 4 and 5. Even so, there was significant correlation between the 2 measures, and the physicians in this study regarded the CST as equal or preferable due to the immediacy of its feedback. This successfully accomplished one of the initial goals of our intervention. Notably, CST satisfaction polled much higher than CG-CAHPS scores at every time point. Due to differences in the timing of survey administration, the CST is better positioned than CG-CAHPS to assess the patient's visit experience.
This is concordant with the current understanding that, compared to mail surveys, point-of-care surveys generate higher response rates and higher satisfaction scores through reduction of both recall bias and nonresponse bias.35,57

Limitations

In our experience, the utility of the CST is most limited by differing levels of physician investment in the implementation process. Results are more robust for providers who are willing to discuss the CST with each of their patients and encourage patients to complete the form. Additionally, we noted that reduced utilization coincided with the hiring of new certified medical assistants, which poses a problem for clinics with high turnover. The distributions of the outcome variables are highly skewed, limiting the sensitivity of the regression analysis. Due to the small number of providers, our analysis is susceptible to outlier results, as noted in the last 3 mo of the study, and is limited in drawing conclusions about Orthopedics and Physiatry, with 3 physicians each. This also means that we were underpowered to detect less dramatic relationships. Centers for Medicare and Medicaid Services policies preclude any patient surveys after the end of the visit, so delayed retesting for reliability was not possible. Finally, the PC section of the CG-CAHPS captures 4 categories, 2 of which are not addressed by the CST.

Future Directions

Based on the success of the CST within the spine clinic, we have expanded the CST to our brain tumor clinic, 2 other neurosurgical clinics, and beyond neurosurgery and spine. These collaborations have been very successful, without complaints about implementation from the new locations. This is consistent with our initial finding that the CST presents a low burden to clinic staff and patients. Due to the open-ended nature of the assessment, little personalization is required to implement the CST in other specialties, and we have kept the CST free for research and clinical use.
The only barriers are the training of clinic staff and the salary of an assistant to transcribe and aggregate written patient-reported data. By initiating the CST in an electronic format, a practice can minimize that cost. The ultimate role of this study is to serve as a successful example of a shorter, more useful, and well-received visit assessment. Practice groups can use the CST as a supplement to, and eventual replacement for, CG-CAHPS. As patient assessment continuously evolves, assessment tools should be equally patient and provider friendly.

CONCLUSION

This yearlong implementation study demonstrated that the CST is a simple, short, and effective intervention for capturing the patient experience. The CST was able to provide real-time and interpretable feedback, individualized responses, modest improvement in satisfaction and communication scores, and substantial physician engagement. There are low barriers to expansion to new practices. We hope the CST inspires a move toward more straightforward, open-ended assessments of the patient experience that allow patients to truly communicate their needs to providers.

Disclosures

Research reported in this publication was supported by the National Center For Advancing Translational Sciences of the National Institutes of Health under Award Number TL1TR001116. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Duke Medicine IRB# Pro00081925 – Exempted Protocol. Dr Gottfried is a medical consultant for Pioneer Surgical Technology, Inc. The other authors have no personal, financial, or institutional interest in any of the drugs, materials, or devices described in this article.

REFERENCES

1. Barry CA, Bradley CP, Britten N, Stevenson FA, Barber N. Patients' unvoiced agendas in general practice consultations: qualitative study. BMJ. 2000;320(7244):1246-1250.
2. Bell RA, Kravitz RL, Thom D, Krupat E, Azari R. Unsaid but not forgotten: patients' unvoiced desires in office visits. Arch Intern Med. 2001;161(16):1977-1984.
3. Low LL, Sondi S, Azman AB, et al. Extent and determinants of patients' unvoiced needs. Asia Pac J Public Health. 2011;23(5):690-702.
4. Jensen IB, Bodin L, Ljungqvist T, Gunnar Bergström K, Nygren A. Assessing the needs of patients in pain: a matter of opinion? Spine. 2000;25(21):2816-2823.
5. Bowling A, Rowe G, Lambert N, et al. The measurement of patients' expectations for health care: a review and psychometric testing of a measure of patients' expectations. Health Technol Assess. 2012;16(30):i-xii, 1-509.
6. Bowling A, Rowe G, McKee M. Patients' experiences of their healthcare in relation to their expectations and satisfaction: a population survey. J R Soc Med. 2013;106(4):143-149.
7. Kravitz RL, Bell RA, Azari R, Kelly-Reif S, Krupat E, Thom DH. Direct observation of requests for clinical services in office practice: what do patients want and do they get it? Arch Intern Med. 2003;163(14):1673-1681.
8. Barr DA, Vergun P. Using a new method of gathering patient satisfaction data to assess the effects of organizational factors on primary care quality. Jt Comm J Qual Improv. 2000;26(12):713-723.
9. Clinician & Group. Available at: https://www.ahrq.gov/cahps/surveys-guidance/cg/index.html. Accessed August 31, 2017.
10. Ferranti DE, Makoul G, Forth VE, Rauworth J, Lee J, Williams MV. Assessing patient perceptions of hospitalist communication skills using the Communication Assessment Tool (CAT). J Hosp Med. 2010;5(9):522-527.
11. Glickman SW, Schulman KA. The mis-measure of physician performance. Am J Manag Care. 2013;19(10):782-785.
12. Godil SS, Parker SL, Zuckerman SL, et al. Determining the quality and effectiveness of surgical spine care: patient satisfaction is not a valid proxy. Spine J. 2013;13(9):1006-1012.
13. Hays RD, Chong K, Brown J, Spritzer KL, Horne K. Patient reports and ratings of individual physicians: an evaluation of the DoctorGuide and Consumer Assessment of Health Plans Study provider-level surveys. Am J Med Qual. 2003;18(5):190-196.
14. Solomon LS, Hays RD, Zaslavsky AM, Ding L, Cleary PD. Psychometric properties of a group-level Consumer Assessment of Health Plans Study (CAHPS) instrument. Med Care. 2005;43(1):53-60.
15. Kanouse DE, Schlesinger M, Shaller D, Martino SC, Rybowski L. How patient comments affect consumers' use of physician performance measures. Med Care. 2016;54(1):24-31.
16. Schlesinger M, Kanouse DE, Rybowski L, Martino SC, Shaller D. Consumer response to patient experience measures in complex information environments. Med Care. 2012;50 Suppl:S56-S64.
17. Stucky BD, Hays RD, Edelen MO, Gurvey J, Brown JA. Possibilities for shortening the CAHPS clinician and group survey. Med Care. 2016;54(1):32-37.
18. Boiko O, Campbell JL, Elmore N, Davey AF, Roland M, Burt J. The role of patient experience surveys in quality assurance and improvement: a focus group study in English general practice. Health Expect. 2015;18(6):1982-1994.
19. Conrad F, Kreuter F. Memory and Recall: Length of Reference Period - University of Michigan. Available at: https://www.coursera.org/learn/questionnaire-design/lecture/P2AwF/memory-and-recall-length-of-reference-period. Accessed November 18, 2017.
20. Mazur MD, McEvoy S, Schmidt MH, Bisson EF. High self-assessment of disability and the surgeon's recommendation against surgical intervention may negatively impact satisfaction scores in patients with spinal disorders. J Neurosurg Spine. 2015;22(6):666-671.
21. Quigley DD, Elliott MN, Farley DO, Burkhart Q, Skootsky SA, Hays RD. Specialties differ in which aspects of doctor communication predict overall physician ratings. J Gen Intern Med. 2014;29(3):447-454.
22. Bible JE, Shau DN, Kay HF, Cheng JS, Aaronson OS, Devin CJ. Are low patient satisfaction scores always due to the provider? Determinants of patient satisfaction scores during spine clinic visits. Spine. 2016. doi:10.1097/BRS.0000000000001453 [published online ahead of print].
23. Indovina K, Keniston A, Reid M, et al. Real-time patient experience surveys of hospitalized medical patients. J Hosp Med. 2016;11(4):251-256.
24. Pena SM, Lawrence N. Analysis of wait times and impact of real-time surveys on patient satisfaction. Dermatol Surg. 2017;43(10):1288-1291.
25. Torok H, Ghazarian SR, Kotwal S, Landis R, Wright S, Howell E. Development and validation of the tool to assess inpatient satisfaction with care from hospitalists. J Hosp Med. 2014;9(9):553-558.
26. Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25(12):986-992.
27. Drake KM, Hargraves JL, Lloyd S, Gallagher PM, Cleary PD. The effect of response scale, administration mode, and format on responses to the CAHPS Clinician and Group survey. Health Serv Res. 2014;49(4):1387-1399.
28. Alemi F, Jasper H. An alternative to satisfaction surveys: let the patients talk. Qual Manag Health Care. 2014;23(1):10-19.
29. Makoul G, Krupat E, Chang C-H. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns. 2007;67(3):333-342.
30. Biglino G, Koniordou D, Gasparini M, et al. Piloting the use of patient-specific cardiac models as a novel tool to facilitate communication during clinical consultations. Pediatr Cardiol. 2017;38(4):813-818.
31. Martino SC, Shaller D, Schlesinger M, et al. CAHPS and comments: how closed-ended survey questions and narrative accounts interact in the assessment of patient experience. J Patient Exp. 2017;4(1):37-45.
32. Bergeson SC, Gray J, Ehrmantraut LA, Laibson T, Hays RD. Comparing web-based with mail survey administration of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Survey. Prim Health Care Open Access. 2013;3. doi:10.4172/2167-1079.1000132.
33. Anastario MP, Rodriguez HP, Gallagher PM, et al. A randomized trial comparing mail versus in-office distribution of the CAHPS Clinician and Group Survey. Health Serv Res. 2010;45(5 Pt 1):1345-1359.
34. Riskind P, Fossey L, Brill K. Why measure patient satisfaction? J Med Pract Manage. 2011;26(4):217-220.
35. Perneger TV, Chamot E, Bovier PA. Nonresponse bias in a survey of patient perceptions of hospital care. Med Care. 2005;43(4):374-380.
36. Franz EW, Bentley JN, Yee PPS, et al. Patient misconceptions concerning lumbar spondylosis diagnosis and treatment. J Neurosurg Spine. 2015;22(5):496-502.
37. Chotai S, Sivaganesan A, Parker SL, McGirt MJ, Devin CJ. Patient-specific factors associated with dissatisfaction after elective surgery for degenerative spine diseases. Neurosurgery. 2015;77(2):157-163; discussion 163.
38. Murray-García JL, Selby JV, Schmittdiel J, Grumbach K, Quesenberry CP. Racial and ethnic differences in a patient survey: patients' values, ratings, and reports regarding physician primary care performance in a large health maintenance organization. Med Care. 2000;38(3):300-310.
39. Hasnain M, Schwartz A, Girotti J, Bixby A, Rivera L; UIC Experiences of Care Project Group. Differences in patient-reported experiences of care by race and acculturation status. J Immigrant Minority Health. 2013;15(3):517-524.
40. Rodriguez HP, Crane PK. Examining multiple sources of differential item functioning on the Clinician & Group CAHPS® survey. Health Serv Res. 2011;46(6 Pt 1):1778-1802.
41. Menendez ME, Loeffler M, Ring D. Patient satisfaction in an outpatient hand surgery office: a comparison of English- and Spanish-speaking patients. Qual Manag Health Care. 2015;24(4):183-189.
42. Burt J, Abel G, Elmore N, et al. Understanding negative feedback from South Asian patients: an experimental vignette study. BMJ Open. 2016;6(9):e011256.
43. Mead N, Roland M. Understanding why some ethnic minority patients evaluate medical care more negatively than white patients: a cross sectional analysis of a routine patient survey in English general practices. BMJ. 2009;339:b3450.
44. Leckie J, Bull R, Vrij A. The development of a scale to discover outpatients' perceptions of the relative desirability of different elements of doctors' communication behaviours. Patient Educ Couns. 2006;64(1-3):69-77.
45. Williams N, Ogden J. The impact of matching the patient's vocabulary: a randomized control trial. Fam Pract. 2004;21(6):630-635.
46. Soroceanu A, Ching A, Abdu W, McGuire K. Relationship between preoperative expectations, satisfaction, and functional outcomes in patients undergoing lumbar and cervical spine surgery. Spine. 2012;37(2):E103-E108.
47. Levin JM, Winkelman RD, Smith GA, et al. The association between the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey and real-world clinical outcomes in lumbar spine surgery. Spine J. 2017;17(11):1586-1593.
48. Käsbauer S, Cooper R, Kelly L, King J. Barriers and facilitators of a near real-time feedback approach for measuring patient experiences of hospital care. Health Policy Technol. 2017;6(1):51-58.
49. Carter M, Davey A, Wright C, et al. Capturing patient experience: a qualitative study of implementing real-time feedback in primary care. Br J Gen Pract. 2016;66(652):e786-e793.
50. Burt J, Campbell J, Abel G, et al. Improving Patient Experience in Primary Care: A Multimethod Programme of Research on the Measurement and Improvement of Patient Experience. Southampton (UK): NIHR Journals Library; 2017. Available at: http://www.ncbi.nlm.nih.gov/books/NBK436541/. Accessed August 30, 2017.
51. Asprey A, Campbell JL, Newbould J, et al. Challenges to the credibility of patient feedback in primary healthcare settings: a qualitative study. Br J Gen Pract. 2013;63(608):200-208.
52. Quigley DD, Martino SC, Brown JA, Hays RD. Evaluating the content of the communication items in the CAHPS® clinician and group survey and supplemental items with what high-performing physicians say they do. Patient. 2013;6(3):169-177.
53. Castle NG, Brown J, Hepner KA, Hays RD. Review of the literature on survey instruments used to collect data on hospital patients' perceptions of care. Health Serv Res. 2005;40(6 Pt 2):1996-2017.
54. Kneebone R, Nestel D, Yadollahi F, et al. Assessing procedural skills in context: exploring the feasibility of an integrated procedural performance instrument (IPPI). Med Educ. 2006;40(11):1105-1114.
55. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(6):CD000259. doi:10.1002/14651858.CD000259.pub3.
56. Hurst D. Audit and feedback had small but potentially important improvements in professional practice. Evid Based Dent. 2013;14(1):8-9.
57. Gribble RK, Haupt C. Quantitative and qualitative differences between handout and mailed patient satisfaction surveys. Med Care. 2005;43(3):276-281.

Supplemental digital content is available for this article at www.neurosurgery-online.com.
Supplemental Digital Content 1. Table. Physician Self-Assessment.
Supplemental Digital Content 2. Table. Physician Assessment of Patients.
Supplemental Digital Content 3. Table.
Physician Assessment of Nursing.

COMMENTS

The authors report a process improvement effort designed to improve patient satisfaction in the outpatient portion of their practice. They have referenced the SQUIRE guidelines for reporting such efforts, and the report is reasonably complete. Sufficient information is provided to allow the reader to replicate the intervention. The motivation for this exercise arose from some of the deficiencies of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys used by the Centers for Medicare and Medicaid Services and a number of other organizations to assess patient satisfaction. Lack of real-time reporting, basing results on small samples, and failing to provide results that would allow interventions to correct problems for individual patients are the deficiencies identified by the authors, and their intervention is specifically designed to address these issues. The effectiveness of the authors' implementation is impressive. Patient satisfaction as measured by the authors' instrument improved over the period of its use. This intervention demonstrates the possibility that relatively simple, locally designed and implemented patient satisfaction surveys can be effective. However, and perhaps more importantly, there is a problem in principle with statistical analysis of this type of data collected in a process improvement exercise. This type of analysis assumes that the data are a representative sample of a larger population to which the results are to be applied. The authors' impressive implementation indicates that they have included almost every patient in their practice. Therefore, their results are what they are, and there is no need for statistical inference. If, in the future, they make changes in their use of the CST and track the results with similar completeness, they will know the effect on their patient population, again without the need for any statistical inference.
A fundamental aspect of process improvement interventions is that they are local exercises conducted without intent to produce generalizable knowledge. The authors may wish to dissect their data to determine if there are any specific factors that are most important in achieving their objectives, but the structure and conduct of process improvement exercises implies that those findings are not intended to be applicable in other institutions. It is therefore unclear that the detailed statistical analysis should be published. Process improvement interventions are very important to optimizing clinical care. They must be encouraged. They do not ordinarily require approval by an ethics review body (Institutional Review Board or similar entity) because they are not considered research. However, the distinctive difference between a well-documented process improvement intervention and a clinical research project is the intent to create generalizable knowledge. Where that intent exists, patients become subjects of research, and ethical review and informed consent are required. This creates a conflict, because successful process improvement interventions in one institution might improve outcomes in another. Therefore, there is a substantial rationale for publication of such efforts. Some would then say that this moves the process improvement project into the category of clinical research, with a much more complicated set of requirements for the conduct and reporting of the results. Part of the confusion comes from the fact that much of the older neurosurgical literature consists essentially of informal process improvement projects (single-institution case series documenting changes in practice related to changes in outcome) which have been considered and presented as "clinical research" because a clear differentiation of these 2 categories had not been made.
A possible resolution of this conflict is to recognize the limitations in generalizability of process improvement projects and to derive from their publication hypotheses about generalizable interventions that must then be tested in a formal clinical research study. Therefore, publications of process improvement interventions should focus on the details of implementation and the local results achieved but avoid analyses that suggest that generalizable knowledge has been produced. Taking these issues into account, I think this is a useful report of a process improvement project. The authors refer to the SQUIRE (Standards for Quality Improvement Reporting Excellence) guidelines and do a good job of satisfying them. However, they stray across the boundary into aspects of clinical research when they begin to apply tests of statistical inference (manifested as P values) to their observations about predictors of patient satisfaction and create multivariate regression models. If their intent is to create generalizable knowledge about patient satisfaction, this study should be viewed as a prospective cohort study and judged by the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines. For this purpose, the report falls short: there is no prespecified hypothesis to be tested; a more detailed description of the patient population would be required (to allow inference of applicability to other populations); and deeper analyses of potential confounding variables, limitations of the study, and constraints on the generalization of results should be given. As a result, the conclusions should simply be that in this practice all but one of the neurosurgeons had a good experience with this immediate patient feedback method, and suggest that other neurosurgeons may want to try it.

Stephen J. Haines
Youssef Hamade
Minneapolis, Minnesota

The authors present an instrument to gauge patient satisfaction with outpatient encounters.
The questionnaire is short, inexpensive, readily integrated into clinic activities, and provides real-time feedback. The authors have also validated it against more comprehensive questionnaires. Furthermore, they have tested the questionnaire with outpatient spinal patients in their home institution and documented ease of use and a high completion rate. The open-ended questions invite useful suggestions to improve patient care. Our hospitals and insurance companies are increasingly demanding reports of patient satisfaction with our care. The CST seems to address those demands admirably. What is best is that its use promotes follow-up with potentially unsatisfied patients and thus the opportunity to respond to concerns about the clinical encounter. Finally, there is the suggestion that its use is associated with improved provider performance and improved patient satisfaction over time.

Sherman C. Stein
Philadelphia, Pennsylvania

Copyright © 2018 by the Congress of Neurological Surgeons

Neurosurgery, Oxford University Press

Clinic Satisfaction Tool Improves Communication and Provides Real-Time Feedback

Publisher: Congress of Neurological Surgeons
ISSN: 0148-396X
eISSN: 1524-4040
DOI: 10.1093/neuros/nyy137

Abstract

BACKGROUND: Patient-reported assessments of the clinic experience are increasingly important for improving the delivery of care. The Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS) survey is the current standard for evaluating patients' clinic experience, but its format gives 2-mo delayed feedback on a small proportion of patients in clinic. Furthermore, it fails to give specific actionable results on individual encounters.

OBJECTIVE: To develop and assess the impact of a single-page Clinic Satisfaction Tool (CST) designed to provide real-time feedback, individualized responses, interpretable and actionable feedback, improved patient satisfaction and communication scores, increased physician buy-in, and overall feasibility.

METHODS: We assessed CST use for 12 mo and compared patient-reported outcomes to the year prior. We assessed all clinic encounters for patient satisfaction, all physicians for CG-CAHPS global rating and physician communication scores, and evaluated the physician experience 1 yr after implementation.

RESULTS: During implementation, 14 690 patients were seen by 12 physicians, with a 96% overall CST utilization rate. Physicians considered the CST superior to CG-CAHPS in providing immediate feedback. CG-CAHPS global scores trended toward improvement and were predicted by CST satisfaction scores (P < .05). CG-CAHPS physician communication scores were also predicted by CST satisfaction scores (P < .01). High CST satisfaction scores were predicted by high utilization (P < .05). Negative feedback dropped significantly over the course of the study (P < .05).

CONCLUSION: The CST is a low-cost, high-yield improvement over the current method of capturing the clinic experience, improves communication and satisfaction between physicians and patients, and provides real-time feedback to physicians.
Clinic satisfaction tool, Patient satisfaction, Patient experience, CG-CAHPS, Neurosurgery clinic

ABBREVIATIONS
CG-CAHPS: Clinician and Group Consumer Assessment of Healthcare Providers and Systems
CST: Clinic Satisfaction Tool
IRB: institutional review board
PC: physician communication
SQUIRE: Standards for Quality Improvement Reporting Excellence

Up to half of all patients leave the clinic visit with an unvoiced need.1-3 This may be due to forgetting, feeling uncomfortable or intimidated, or running out of time. These unvoiced needs are problematic because provider–patient miscommunication can lead to suboptimal patient care, including reduced symptom improvement and even patient safety issues.2,4 Conversely, while patients who make more requests are seen as demanding by their providers, they are also more likely to have their needs fulfilled and to be satisfied with the outcome of their visit.5-7 There have been many attempts to evaluate the patient experience, but most, including the nationally implemented Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS), do not address this major void in outpatient care.8-10 In addition, CG-CAHPS fails to deliver timely, actionable feedback, since feedback is delivered months after a visit and performance is reported as percentage scores for each closed-ended question.
Its validation in a primary care setting means results are not generalizable to specialties where there is a large burden of severe, chronic, and/or surgical disease.11-14 Patients find such surveys burdensomely long and the results laborious to understand.15-17 In addition, physicians can find these surveys unreliable, biased, or irrelevant to their practice.18-21 For example, orthopedic surgeons and neurosurgeons score worse on communication compared to the general population of physicians, but those low scores can depend on factors irrelevant to the actual episode of care, such as scheduling and parking.21,22 There have been attempts to design better questionnaires on the inpatient side, but a recent study using an outpatient questionnaire was limited by the same low response rates as CG-CAHPS.10,23-25 Because of the failures of current methods, our team, with input from our nursing staff, nurse manager, and other providers in our clinic, designed the Clinic Satisfaction Tool (CST). We used an open-ended, minimalist approach to questionnaire design in order to seamlessly integrate the CST into each visit. We then implemented the CST in a multidisciplinary spine clinic and evaluated performance over 12 mo. The specific aims of the CST were to demonstrate real-time feedback, individualized responses, interpretable and actionable feedback, improved patient satisfaction and communication scores, increased physician buy-in, and overall feasibility.

METHODS

We received institutional review board (IRB) approval to perform a pre–post-intervention quality improvement study which included all physicians in the spine clinic who had a full year of CG-CAHPS baseline scores. Beginning January 2016, the CST was integrated into the daily clinic workflow and every patient was provided the CST.
SQUIRE 2.0 (Standards for Quality Improvement Reporting Excellence) guidelines were used for reporting our findings.26

Inclusion Criteria

Physicians were included in the analysis if they were in: neurosurgery, orthopedic surgery, or physiatry; outpatient practice at the Durham, NC location of the Duke Spine Center from January 2015 to December 2016; and active practice for the majority of the 2 yr observed. Six neurosurgeons, 3 orthopedic surgeons, and 3 physiatrists were included, for a total of 12 physicians (Table 1). Our physician cohort was mostly male, almost all fellowship-trained, and spanned a wide range of years in practice, 3 to 35 yr.

TABLE 1. Physician Demographics (n = 12)
Gender: 9 (75%) male
Specialty: 6 (50%) Neurosurgery; 3 (25%) Orthopedics; 3 (25%) Physiatry
Fellowship-trained: 10 (83%)
Years in practice (median, range): 9 (3-35)

All patients who presented for a visit within the Spine Center received a CST.

Intervention

The CST (Figure 1) is a patient-centered questionnaire assessing each patient's perception of provider communication and satisfaction with care. The CST also aims to give providers measurable and actionable feedback that they can incorporate into their daily practice.

FIGURE 1. Clinic Satisfaction Tool.
The CST is completed by every patient at each appointment, as shown in Figure 2. While waiting for their provider, patients list their chief complaints in the top half of the form. The physician reviews the CST with the patient to ensure that all concerns are addressed.

FIGURE 2. Clinic workflow with immediate feedback.

After the provider leaves, the patient completes the feedback section of the CST. The form is then collected by the nurse and returned to the physician immediately. If a patient responds "No" to the question "Were you satisfied with your visit?" or "Were all of your questions addressed today?", a rescue process is initiated and physicians return to the room or follow up with the patient by phone within 24 h. This ensures that a real-time solution is reached expediently for any problems. The feedback from the CST is intended to lend insight into areas of improvement for both providers and the overall clinic. On a weekly and monthly basis, written comments from the CST are reported to each provider in the clinic. These reports include intervention utilization scores, Yes/No scores on each question, and written patient feedback.

Data Collected

For each physician, monthly CG-CAHPS reports from January 2015 to December 2016 were collected, including global rating and physician communication (PC) subscores. The CST was implemented in the Duke Spine Center in January 2016. The following information was collected from every patient: chief complaints; how their visit could be improved; satisfaction with visit (yes/no); and all questions answered (yes/no). The patient volume, the number of CSTs returned, the proportion of CSTs marked yes for satisfaction and questions answered, the number of comments, and the types of comments (positive, negative, not applicable) were calculated for each physician on a monthly basis.
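As a rough sketch of the monthly aggregation described above, the per-physician metrics could be computed from individual visit records as follows. This is not the authors' code (their analysis used Excel and R); the field names and the helper function are hypothetical, and a blank feedback section is represented here as None:

```python
from collections import defaultdict

def monthly_cst_metrics(visits):
    """Aggregate CST results per (physician, month).

    Each visit is a dict with hypothetical fields:
      physician, month, cst_returned (bool),
      satisfied (True/False/None), questions_answered (True/False/None).
    None means that yes/no item was left blank.
    """
    groups = defaultdict(list)
    for v in visits:
        groups[(v["physician"], v["month"])].append(v)

    metrics = {}
    for key, group in groups.items():
        volume = len(group)
        returned = [v for v in group if v["cst_returned"]]
        # Feedback completers: answered at least one yes/no question.
        completed = [v for v in returned
                     if v["satisfied"] is not None
                     or v["questions_answered"] is not None]
        # Satisfaction "Yes" score: checked yes to satisfaction and/or
        # questions answered, out of those who completed the feedback section.
        yes = [v for v in completed
               if v["satisfied"] or v["questions_answered"]]
        metrics[key] = {
            "patient_volume": volume,
            "utilization": len(returned) / volume,
            "satisfaction_yes": len(yes) / len(completed) if completed else None,
        }
    return metrics
```

The "satisfaction_yes" definition mirrors the study's outcome (yes to satisfaction and/or questions answered among feedback completers); comment counts and comment types would be added analogously.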
Implementation Assessment

Eleven of the physicians included in the study (excluding the senior author) were emailed a Qualtrics survey asking them to consider the experiences of 3 stakeholders (themselves, patients, and nursing) to give a comprehensive evaluation of the CST. The survey was sent 18 mo into implementation of the CST to assess its feasibility in clinic, its utility as a feedback tool, and provider and nursing satisfaction.

Statistical Analysis

We collected 12 mo of longitudinal data from 12 providers within 3 departments. Therefore, we used a linear mixed model to account for interprovider and interdepartmental variations; equal variance was assumed within departments. Kruskal-Wallis testing was used for individual categorical variables due to skewed outcome measure distributions, and univariate mixed model regression was used for continuous data. Mixed model multivariate regression was then performed for 3 outcome measures: CG-CAHPS global score; CG-CAHPS PC score; and CST satisfaction "Yes" score (the proportion of patients who checked yes to satisfaction and/or questions answered out of those who completed the feedback section). For determination of significance, α was set to 0.05. Datasets were unified in Microsoft Excel (Microsoft Inc, Redmond, Washington), and all analyses were performed with RStudio (RStudio Inc, Boston, Massachusetts).

RESULTS

CST Utilization

During the first 12 mo of CST implementation, 14 690 patients were seen, of whom 95.6% returned the CST. Monthly utilization of the form by providers increased from below 85% to above 95% by the third month of the study and remained high throughout. This utilization rate was concordant with the physicians' perception of patient usage (Tables, Supplemental Digital Content 1 and 2). The highest overall patient volume belonged to physiatry (54%), with the rest divided between neurosurgery (23%) and orthopedics (23%).
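For intuition only, the univariate relationship described under Statistical Analysis (CST satisfaction predicting a score such as the CG-CAHPS global rating) can be sketched as a plain least-squares fit. This is a deliberate simplification of the study's mixed-model regression, since it ignores the grouping by provider and department, and the monthly values below are illustrative, not study data:

```python
def least_squares(x, y):
    """Ordinary least-squares slope and intercept for paired monthly scores.

    A simplified stand-in for the study's univariate mixed-model regression;
    it does not model provider- or department-level random effects.
    """
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)          # variance term
    sxy = sum((xi - mean_x) * (yi - mean_y)            # covariance term
              for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Illustrative monthly proportions (made up for the sketch):
# CST satisfaction "Yes" score vs CG-CAHPS global top-box proportion.
cst = [0.95, 0.96, 0.97, 0.98, 0.99]
global_score = [0.80, 0.82, 0.83, 0.85, 0.86]
slope, intercept = least_squares(cst, global_score)
```

A positive slope corresponds to the reported direction of the association; the actual analysis additionally tested significance within the mixed-model framework in R.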
Satisfaction Results

CG-CAHPS Scores

Over the 12 mo of the study, there was a trend toward improvement in the mean CG-CAHPS global score for all providers, but these trends were not statistically significant (Figure 3). There was no significant change in CG-CAHPS PC scores for our cohort.

FIGURE 3. CG-CAHPS global score, all providers, Jan 2015 to Jan 2016. Global score is given as a proportion. The CST was implemented beginning in January 2016. Overall R2 = 0.0008, P = .74.

CST Scores

During the first 12 mo of CST collection, average provider satisfaction ratings were above 97% at all times. CST satisfaction scores demonstrated a trend toward improvement over time (P = .06). Among neurosurgeons, average ratings remained above 96% at all times. There was a decreasing trend in the percentage of negative comments received by all providers over the course of the study (Figure 4). Follow-up investigation of the rise in the last quarter revealed outlier results from 1 physician with a known external cause; excluding that individual revealed a much stronger trend (P = .05).

FIGURE 4. Negative comments on CST decrease over time. A single provider contributed the majority of negative comments in the last 3 mo of the study secondary to personal reasons. There was a decreasing trend in negative feedback throughout the study that is strengthened without the outlier provider. Red = adjusted, teal = original comment score. Original: R2 = 0.03, P = .57. Adjusted: R2 = 0.33, P = .05.
Predictors of Patient Satisfaction

CG-CAHPS Global Predictors

Each patient-reported outcome was tested across the 12-mo period for predictors among department, month, patient volume, CST utilization, and CST satisfaction score (Table 2). Neurosurgery had significantly higher baseline global scores compared to orthopedics and physiatry (P = .001). In univariate analysis, CG-CAHPS global rating was significantly predicted by CST satisfaction over our study period. Patient volume had a negative relationship with global ratings, with a drop of 0.5% per 20 additional patients per month (roughly 1 additional clinic day), but this relationship was not significant, and our data were not powered to detect a small effect at the 0.05 significance level. Conversely, CST satisfaction was a positive predictor of CG-CAHPS global rating (P = .03). In the multivariate model, global scores were best predicted by CST satisfaction (P = .03) after accounting for provider and department. Comment type was removed from the regression due to low comment counts.

TABLE 2. Overall Predictors of Patient Satisfaction

Outcome Measure                     Variable           Univariate P-value   Multivariate P-value
CG-CAHPS Global Rating              Department         .001a                Ortho .28; Physiatry .13
                                    Month              .78                  b
                                    Patient volume     .38                  b
                                    CST utilization    .27                  b
                                    CST satisfaction   .03                  .03
CG-CAHPS Physician Communication    Department         .16                  Ortho .67; Physiatry .99
                                    Month              .11                  .09
                                    Patient volume     .39                  b
                                    CST utilization    .73                  b
                                    CST satisfaction   .01                  .009
CST Satisfaction Score              Department         .04a                 Ortho .93; Physiatry .44
                                    Month              .48                  b
                                    Patient volume     .75                  b
                                    CST utilization    .02                  .05

aPost-hoc testing showed Neurosurgery received the highest average rating, followed by Orthopedics, then Physiatry. Department was used in the mixed model as a grouping variable, not as a predictor.
bVariables with a univariate P-value ≥ .20 were not included in the multivariate model.

CG-CAHPS PC Predictors

In univariate analysis, CST satisfaction was positively correlated with the CG-CAHPS PC rating over our study period (P = .01); however, there was no significant difference between departments. On multivariate analysis, CST satisfaction remained a significant predictor of PC scores (P = .009).

CST Satisfaction Predictors

Neurosurgery scores exceeded those of other departments (P = .04).
In univariate analysis, high CST utilization by the provider was associated with higher CST scores (P = .02). On multivariate analysis, utilization trended toward being a positive predictor of CST satisfaction (P = .05).

Implementation Assessment Survey Results

Workflow and Communication

Nine of the 11 physicians asked (82%) responded to our survey (Tables, Supplemental Digital Content 1-3). Overall, the providers included in the study used the CST in the manner intended: it was given to almost every patient, reviewed with the patient by the physician while in the room, and considered by physicians to be helpful for communicating with patients. Physicians also felt that patients generally liked the tool and were able to communicate better with the CST, and that nurses were helpful in encouraging patients to leave feedback on the CST.

CST Utility

In the last section of the survey, physicians were asked to discuss their personal implementation of the CST and its utility. Seventy-five percent of physicians regularly used the individual CSTs as a source of patient feedback. However, only 38% used the monthly emailed CST report on a regular basis, with another 38% using it sometimes. In comparison, 88% of physicians used the monthly CG-CAHPS reports as a source of patient feedback. Physicians were then asked to compare the reports in utility: for half of the physicians, the CST and CG-CAHPS reports were equally useful. Three-quarters of the remainder found the CST to be more useful than CG-CAHPS, and one individual did not use either as a source of feedback. The reasons given and constructive feedback are listed in Table 3.

TABLE 3. Physician Feedback Results

A. CST utility over CG-CAHPS
• “more specific”
• “It provides immediate opportunity to improve care”
• “allows patients to list [their] specific questions”

B. Constructive feedback
• “Need to make sure its reviewed prior to entering room and also prior to patient leaving the room”
• “I would like to see the nurse/cma encourage the patient to put feedback”
• “standardize workflow among all clinic staff on soliciting post-visit feedback”
• “would be helpful to see questions before seeing patient”
DISCUSSION

This single-center, multidisciplinary pre-post study encompassing 12 physicians and 14 690 patients demonstrated that a single-page questionnaire can be easily implemented with almost perfect utilization using existing clinic infrastructure. Physicians felt that the CST was useful and provided individualized, actionable, and real-time feedback. CST scores trended with CG-CAHPS global and PC scores. CG-CAHPS global scores, but not PC scores, improved modestly over the course of the study. The highest CST performers were those who utilized the tool most.

CST Addresses CG-CAHPS Limitations

The CST was designed to address specific problems with CG-CAHPS (Table 4). First, the shortest version of CG-CAHPS is 4 pages long but could be substantially shortened without a significant loss of reliability.17,27 While ours is not the first program to develop a shorter method for collecting patient feedback, we are the first to demonstrate robust feedback return rates of 96%.10,24,28-30 Despite its length, CG-CAHPS fails to capture open-ended patient data.31 The long form contributes to poor return: at our institution, only 10% of patients respond to the global question, while other institutions report response rates between 9% and 58%.24,32-34 This low return risks nonresponse bias and challenges the validity of CAHPS results.35

TABLE 4. CST Design Features

1. CG-CAHPS limitation: Long form discourages feedback and solicits few free-text responses. CST feature: Single sheet reduces form fatigue, and free text is easy to use for every patient and clinician.
2. CG-CAHPS limitation: Aggregated data limit association with particular patients. CST feature: Every form is associated directly with the patient and encounter.
3. CG-CAHPS limitation: Low buy-in from all stakeholders. CST feature: Easy and efficient for both patients and providers to understand.
4. CG-CAHPS limitation: Large delay between visits and results. CST feature: Immediate, real-time feedback and actionable content.

Secondly, we returned to individualized feedback, as aggregated reports eliminate any association of results with particular encounters. Aggregates are useful for studying trends, but blinding patient-level data leaves providers unable to interpret comments in the context of patient characteristics. Predictors of decreased satisfaction include lower age, less education, higher pain/disability scores, not speaking English, and uninsured or Medicaid payer status.20,22,36-43 Patients also prefer affective behavior and the spontaneous sharing of information from their providers, as well as providers who match their vocabulary.44,45 For neurosurgical and spine patients, satisfaction is negatively associated with nonsurgical treatment plans, due to either a lack of patient understanding or inadequate communication from the surgeon.20,36 When neurosurgeons clearly address the expected outcomes of a plan with a patient, as facilitated by the CST, patients are more likely to have an optimal outcome and be satisfied with their care experience independent of their actual surgical outcomes.12,46,47 By having each feedback form available for review during the visit, physicians can address individual needs, contextualize CST feedback, and adjust their behavior for each patient. Thirdly, we addressed low buy-in to CG-CAHPS from both providers and patients.
Barriers to feedback initiatives include poor team communication, a lack of buy-in from stakeholders, and difficult or cumbersome implementation.48-50 Initiatives with a low response rate like CG-CAHPS are felt not to be representative, and often they are not detailed enough to be actionable.35,51 CG-CAHPS has been cited as a poor metric in its current incarnation for the evaluation of physician performance and care quality.11,12,18 Patients themselves prefer to use other, simpler satisfaction measures to make decisions about choosing a physician.15,16 Furthermore, the PC subscore of CG-CAHPS fails to capture dimensions of the encounter that patients feel are important.52 Our intervention avoided these issues because the minimalist, single-page format was easy for both patients and providers to use. This resulted in a high return rate of 96%, minimizing concern for nonresponse bias. Additionally, the open-ended questions made the form relevant for all 3 specialties, and the monthly CST reports were rated as more useful than CG-CAHPS reports.

Lastly, we restored the temporal association between visits and results. Survey dissemination and feedback lag by 2 to 3 months for CG-CAHPS.53 Real-time feedback can be especially helpful and is often preferred where available, as providers can associate feedback with particular behaviors and encounters.24,28,54 For any feedback to be useful, there must be a pathway to improve.18 After barriers are minimized, feedback is useful for improving the patient clinic experience, especially for providers whose scores are low.34,55,56 The CST provides instant, real-time feedback after each visit, and patients can leave constructive comments associated with the visit.

CST Captures Some Parts of CG-CAHPS

A central goal in the design of the CST was to assess the clinic experience while minimizing the burden on staff and patients.
The global rating for CG-CAHPS v3.0 is based solely on question 18, which asks patients to rate their provider on a 10-point scale. The PC score is based on 6 multiple-choice questions with responses ranging from never to always, covering: knowing the patient's medical history; clear explanations; easy-to-understand information; careful listening; respect for patients' words; and spending enough time.9 Despite being based on only 2 categories, CST satisfaction was a predictor of both global and PC scores. There is face validity in the relationship between visit satisfaction on the CST and overall impression of the provider. The global score question asks only: “Using any number from 0 to 10, where 0 is the worst provider possible and 10 is the best provider possible, what number would you use to rate this provider?” and credit for our metrics is given only for the top-box response, a 10. The connection between CST satisfaction and communication is more complex, however. By soliciting chief complaints and providing a tangible copy for patients and providers to reference during a visit, the CST addressed questions 4 and 5. Even so, there was significant correlation between the 2 measures, and the CST was regarded equally or preferentially by the physicians in this study due to the immediacy of its feedback. This successfully accomplished one of the initial goals of our intervention. Notably, CST satisfaction polled much higher than CG-CAHPS scores at every time point. Due to differences in the timing of survey administration, the CST is better positioned than CG-CAHPS to assess the patient's visit experience.
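The top-box convention described above means a provider's global score is simply the share of responses equal to 10. As a small sketch with invented ratings:

```python
# Top-box scoring, as described for the CG-CAHPS global rating:
# credit is given only when the patient rates the provider 10 of 0-10.
def top_box_score(ratings):
    """Proportion of responses equal to the maximum rating of 10."""
    return sum(1 for r in ratings if r == 10) / len(ratings)

# Invented ratings for illustration: 4 of the 6 responses are top box.
print(top_box_score([10, 9, 10, 8, 10, 10]))
```

This all-or-nothing scoring is one reason a 9-out-of-10 visit contributes nothing to the global metric, whereas the CST's binary satisfaction question credits any "Yes."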
This is concordant with the current understanding that, compared to mail surveys, point-of-care surveys generate higher response rates and higher satisfaction scores through reduction of both recall bias and nonresponse bias.35,57

Limitations

In our experience, the utility of the CST is most limited by the differing levels of physician investment in the implementation process. Results are more robust for providers who are willing to discuss the CST with each of their patients and encourage patients to complete the form. Additionally, we noted that reduced utilization coincided with the hiring of new certified medical assistants, which poses a problem in clinics with high turnover. The distributions of the outcome variables are highly skewed, limiting the sensitivity of the regression analysis. Due to the small number of providers, our analysis is susceptible to outlier results, as noted in the last 3 mo of the study, and is limited in drawing conclusions on Orthopedics and Physiatry, with 3 physicians each. This also means that we are underpowered to detect less dramatic relationships. Centers for Medicare and Medicaid Services policies preclude any patient surveys after the end of the visit, so delayed retesting for reliability was not possible. Finally, the PC section of the CG-CAHPS captures 4 categories, 2 of which are not addressed by the CST.

Future Directions

Based on the success of the CST within the spine clinic, we have expanded the CST to our brain tumor clinic, 2 other neurosurgical clinics, and outside of neurosurgery and spine as well. These collaborations have been very successful, without complaints about implementation from the new locations. This is consistent with our initial finding that the CST presents a low burden to clinic staff and patients. Due to the open-ended nature of the assessment, little personalization is required to implement the CST in other specialties, and we have kept the CST free for research and clinical use.
The only barriers are the training of clinic staff and the salary of an assistant to transcribe and aggregate written patient-reported data. By initiating the CST in an electronic format, a practice can minimize that cost. The ultimate role of this study is to serve as a successful example of a shorter, more useful, and well-received visit assessment. Practice groups can use the CST as a supplement to, and eventual replacement for, CG-CAHPS. As patient assessment continuously evolves, assessment tools should be equally patient and provider friendly.

CONCLUSION

This yearlong implementation study demonstrated that the CST is a simple, short, and effective intervention for capturing the patient experience. The CST provided real-time and interpretable feedback, individualized responses, modest improvement in satisfaction and communication scores, and substantial physician engagement. There are low barriers to expansion to new practices. We hope the CST inspires a move toward more straightforward, open-ended assessments of the patient experience that allow patients to truly communicate their needs to providers.

Disclosures

Research reported in this publication was supported by the National Center For Advancing Translational Sciences of the National Institutes of Health under Award Number TL1TR001116. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Duke Medicine IRB# Pro00081925 – Exempted Protocol. Dr Gottfried is a medical consultant for Pioneer Surgical Technology, Inc. The other authors have no personal, financial, or institutional interest in any of the drugs, materials, or devices described in this article.

REFERENCES

1. Barry CA, Bradley CP, Britten N, Stevenson FA, Barber N. Patients' unvoiced agendas in general practice consultations: qualitative study. BMJ. 2000;320(7244):1246-1250.
2. Bell RA, Kravitz RL, Thom D, Krupat E, Azari R. Unsaid but not forgotten: patients' unvoiced desires in office visits. Arch Intern Med. 2001;161(16):1977-1984.
3. Low LL, Sondi S, Azman AB, et al. Extent and determinants of patients' unvoiced needs. Asia Pac J Public Health. 2011;23(5):690-702.
4. Jensen IB, Bodin L, Ljungqvist T, Gunnar Bergström K, Nygren A. Assessing the needs of patients in pain: a matter of opinion? Spine. 2000;25(21):2816-2823.
5. Bowling A, Rowe G, Lambert N, et al. The measurement of patients' expectations for health care: a review and psychometric testing of a measure of patients' expectations. Health Technol Assess. 2012;16(30):i-xii, 1-509.
6. Bowling A, Rowe G, McKee M. Patients' experiences of their healthcare in relation to their expectations and satisfaction: a population survey. J R Soc Med. 2013;106(4):143-149.
7. Kravitz RL, Bell RA, Azari R, Kelly-Reif S, Krupat E, Thom DH. Direct observation of requests for clinical services in office practice: what do patients want and do they get it? Arch Intern Med. 2003;163(14):1673-1681.
8. Barr DA, Vergun P. Using a new method of gathering patient satisfaction data to assess the effects of organizational factors on primary care quality. Jt Comm J Qual Improv. 2000;26(12):713-723.
9. Clinician & Group. Available at: https://www.ahrq.gov/cahps/surveys-guidance/cg/index.html. Accessed August 31, 2017.
10. Ferranti DE, Makoul G, Forth VE, Rauworth J, Lee J, Williams MV. Assessing patient perceptions of hospitalist communication skills using the Communication Assessment Tool (CAT). J Hosp Med. 2010;5(9):522-527.
11. Glickman SW, Schulman KA. The mis-measure of physician performance. Am J Manag Care. 2013;19(10):782-785.
12. Godil SS, Parker SL, Zuckerman SL, et al. Determining the quality and effectiveness of surgical spine care: patient satisfaction is not a valid proxy. Spine J. 2013;13(9):1006-1012.
13. Hays RD, Chong K, Brown J, Spritzer KL, Horne K. Patient reports and ratings of individual physicians: an evaluation of the DoctorGuide and Consumer Assessment of Health Plans Study provider-level surveys. Am J Med Qual. 2003;18(5):190-196.
14. Solomon LS, Hays RD, Zaslavsky AM, Ding L, Cleary PD. Psychometric properties of a group-level Consumer Assessment of Health Plans Study (CAHPS) instrument. Med Care. 2005;43(1):53-60.
15. Kanouse DE, Schlesinger M, Shaller D, Martino SC, Rybowski L. How patient comments affect consumers' use of physician performance measures. Med Care. 2016;54(1):24-31.
16. Schlesinger M, Kanouse DE, Rybowski L, Martino SC, Shaller D. Consumer response to patient experience measures in complex information environments. Med Care. 2012;50 Suppl:S56-S64.
17. Stucky BD, Hays RD, Edelen MO, Gurvey J, Brown JA. Possibilities for shortening the CAHPS clinician and group survey. Med Care. 2016;54(1):32-37.
18. Boiko O, Campbell JL, Elmore N, Davey AF, Roland M, Burt J. The role of patient experience surveys in quality assurance and improvement: a focus group study in English general practice. Health Expect. 2015;18(6):1982-1994.
19. Conrad F, Kreuter F. Memory and Recall: Length of Reference Period - University of Michigan. Available at: https://www.coursera.org/learn/questionnaire-design/lecture/P2AwF/memory-and-recall-length-of-reference-period. Accessed November 18, 2017.
20. Mazur MD, McEvoy S, Schmidt MH, Bisson EF. High self-assessment of disability and the surgeon's recommendation against surgical intervention may negatively impact satisfaction scores in patients with spinal disorders. J Neurosurg Spine. 2015;22(6):666-671.
21. Quigley DD, Elliott MN, Farley DO, Burkhart Q, Skootsky SA, Hays RD. Specialties differ in which aspects of doctor communication predict overall physician ratings. J Gen Intern Med. 2014;29(3):447-454.
22. Bible JE, Shau DN, Kay HF, Cheng JS, Aaronson OS, Devin CJ. Are low patient satisfaction scores always due to the provider? Determinants of patient satisfaction scores during spine clinic visits. Spine. 2016. doi:10.1097/BRS.0000000000001453 [published online ahead of print].
23. Indovina K, Keniston A, Reid M, et al. Real-time patient experience surveys of hospitalized medical patients. J Hosp Med. 2016;11(4):251-256.
24. Pena SM, Lawrence N. Analysis of wait times and impact of real-time surveys on patient satisfaction. Dermatol Surg. 2017;43(10):1288-1291.
25. Torok H, Ghazarian SR, Kotwal S, Landis R, Wright S, Howell E. Development and validation of the tool to assess inpatient satisfaction with care from hospitalists. J Hosp Med. 2014;9(9):553-558.
26. Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25(12):986-992.
27. Drake KM, Hargraves JL, Lloyd S, Gallagher PM, Cleary PD. The effect of response scale, administration mode, and format on responses to the CAHPS Clinician and Group survey. Health Serv Res. 2014;49(4):1387-1399.
28. Alemi F, Jasper H. An alternative to satisfaction surveys: let the patients talk. Qual Manag Health Care. 2014;23(1):10-19.
29. Makoul G, Krupat E, Chang C-H. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns. 2007;67(3):333-342.
30. Biglino G, Koniordou D, Gasparini M, et al. Piloting the use of patient-specific cardiac models as a novel tool to facilitate communication during clinical consultations. Pediatr Cardiol. 2017;38(4):813-818.
31. Martino SC, Shaller D, Schlesinger M, et al. CAHPS and comments: how closed-ended survey questions and narrative accounts interact in the assessment of patient experience. J Patient Exp. 2017;4(1):37-45.
32. Bergeson SC, Gray J, Ehrmantraut LA, Laibson T, Hays RD. Comparing web-based with mail survey administration of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Survey. Prim Health Care Open Access. 2013;3. doi:10.4172/2167-1079.1000132.
33. Anastario MP, Rodriguez HP, Gallagher PM, et al. A randomized trial comparing mail versus in-office distribution of the CAHPS Clinician and Group Survey. Health Serv Res. 2010;45(5 Pt 1):1345-1359.
34. Riskind P, Fossey L, Brill K. Why measure patient satisfaction? J Med Pract Manage. 2011;26(4):217-220.
35. Perneger TV, Chamot E, Bovier PA. Nonresponse bias in a survey of patient perceptions of hospital care. Med Care. 2005;43(4):374-380.
36. Franz EW, Bentley JN, Yee PPS, et al. Patient misconceptions concerning lumbar spondylosis diagnosis and treatment. J Neurosurg Spine. 2015;22(5):496-502.
37. Chotai S, Sivaganesan A, Parker SL, McGirt MJ, Devin CJ. Patient-specific factors associated with dissatisfaction after elective surgery for degenerative spine diseases. Neurosurgery. 2015;77(2):157-163; discussion 163.
38. Murray-García JL, Selby JV, Schmittdiel J, Grumbach K, Quesenberry CP. Racial and ethnic differences in a patient survey: patients' values, ratings, and reports regarding physician primary care performance in a large health maintenance organization. Med Care. 2000;38(3):300-310.
39. Hasnain M, Schwartz A, Girotti J, Bixby A, Rivera L; UIC Experiences of Care Project Group. Differences in patient-reported experiences of care by race and acculturation status. J Immigrant Minority Health. 2013;15(3):517-524.
40. Rodriguez HP, Crane PK. Examining multiple sources of differential item functioning on the Clinician & Group CAHPS® survey. Health Serv Res. 2011;46(6 pt 1):1778-1802.
41. Menendez ME, Loeffler M, Ring D. Patient satisfaction in an outpatient hand surgery office: a comparison of English- and Spanish-speaking patients. Qual Manag Health Care. 2015;24(4):183-189.
42. Burt J, Abel G, Elmore N, et al. Understanding negative feedback from South Asian patients: an experimental vignette study. BMJ Open. 2016;6(9):e011256.
43. Mead N, Roland M. Understanding why some ethnic minority patients evaluate medical care more negatively than white patients: a cross sectional analysis of a routine patient survey in English general practices. BMJ. 2009;339(3):b3450.
44. Leckie J, Bull R, Vrij A. The development of a scale to discover outpatients' perceptions of the relative desirability of different elements of doctors' communication behaviours. Patient Educ Couns. 2006;64(1-3):69-77.
45. Williams N, Ogden J. The impact of matching the patient's vocabulary: a randomized control trial. Fam Pract. 2004;21(6):630-635.
46. Soroceanu A, Ching A, Abdu W, McGuire K. Relationship between preoperative expectations, satisfaction, and functional outcomes in patients undergoing lumbar and cervical spine surgery. Spine. 2012;37(2):E103-E108.
47. Levin JM, Winkelman RD, Smith GA, et al. The association between the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey and real-world clinical outcomes in lumbar spine surgery. Spine J. 2017;17(11):1586-1593.
48. Käsbauer S, Cooper R, Kelly L, King J. Barriers and facilitators of a near real-time feedback approach for measuring patient experiences of hospital care. Health Policy Technol. 2017;6(1):51-58.
49. Carter M, Davey A, Wright C, et al. Capturing patient experience: a qualitative study of implementing real-time feedback in primary care. Br J Gen Pract. 2016;66(652):e786-e793.
50. Burt J, Campbell J, Abel G, et al. Improving Patient Experience in Primary Care: A Multimethod Programme of Research on the Measurement and Improvement of Patient Experience. Southampton (UK): NIHR Journals Library; 2017. Available at: http://www.ncbi.nlm.nih.gov/books/NBK436541/. Accessed August 30, 2017.
51. Asprey A, Campbell JL, Newbould J, et al. Challenges to the credibility of patient feedback in primary healthcare settings: a qualitative study. Br J Gen Pract. 2013;63(608):200-208.
52. Quigley DD, Martino SC, Brown JA, Hays RD. Evaluating the content of the communication items in the CAHPS® clinician and group survey and supplemental items with what high-performing physicians say they do. Patient. 2013;6(3):169-177.
53. Castle NG, Brown J, Hepner KA, Hays RD. Review of the literature on survey instruments used to collect data on hospital patients' perceptions of care. Health Serv Res. 2005;40(6 pt 2):1996-2017.
54. Kneebone R, Nestel D, Yadollahi F, et al. Assessing procedural skills in context: exploring the feasibility of an integrated procedural performance instrument (IPPI). Med Educ. 2006;40(11):1105-1114.
55. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(6):CD000259. doi:10.1002/14651858.CD000259.pub3.
56. Hurst D. Audit and feedback had small but potentially important improvements in professional practice. Evid Based Dent. 2013;14(1):8-9.
57. Gribble RK, Haupt C. Quantitative and qualitative differences between handout and mailed patient satisfaction surveys. Med Care. 2005;43(3):276-281.

Supplemental digital content is available for this article at www.neurosurgery-online.com.
Supplemental Digital Content 1. Table. Physician Self-Assessment.
Supplemental Digital Content 2. Table. Physician Assessment of Patients.
Supplemental Digital Content 3. Table.
Physician Assessment of Nursing.

COMMENTS

The authors report a process improvement effort designed to improve patient satisfaction in the outpatient portion of their practice. They have referenced the SQUIRE guidelines for reporting such efforts, and the report is reasonably complete. Sufficient information is provided to allow the reader to replicate the intervention. The motivation for this exercise arose from deficiencies of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys used by the Centers for Medicare and Medicaid Services and a number of other organizations to assess patient satisfaction. The lack of real-time reporting, the reliance on small samples, and the absence of results that would allow problems to be corrected for individual patients are identified by the authors, and their intervention is specifically designed to address these issues. The effectiveness of the authors' implementation is impressive. Patient satisfaction, as measured by the authors' instrument, improved over the period of its use. This intervention demonstrates that relatively simple, locally designed and implemented patient satisfaction surveys can be effective. However, perhaps more importantly, there is a problem in principle with statistical analysis of this type of data collected in a process improvement exercise. Such analysis assumes that the data are a representative sample of a larger population to which the results are to be applied. The authors' impressive implementation indicates that they have included almost every patient in their practice. Therefore, their results are what they are, and there is no need for statistical inference. If, in the future, they make changes in their use of the CST and track the results with similar completeness, they will know the effect on their patient population, again without the need for any statistical inference.
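The commenter's distinction between a near-census and a sample can be sketched in a few lines of Python. The satisfaction split below is invented purely for illustration and is not the study's data:

```python
import math
import random

# Hypothetical satisfaction data: 1 = satisfied encounter, 0 = not.
# The split is invented for illustration; it is NOT the study's data.
population = [1] * 13500 + [0] * 1190  # ~14 690 clinic encounters

# Near-complete capture: surveying (almost) every patient yields the
# population parameter directly -- no statistical inference needed.
census_rate = sum(population) / len(population)

# Contrast: a small random sample only *estimates* the rate, so a
# confidence interval is needed to express sampling uncertainty.
random.seed(0)
sample = random.sample(population, 200)
p_hat = sum(sample) / len(sample)
se = math.sqrt(p_hat * (1 - p_hat) / len(sample))
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

print(f"census rate: {census_rate:.3f}")  # the parameter itself
print(f"sample estimate: {p_hat:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

With near-complete capture, the observed rate simply is the population value; only the sampled estimate carries the uncertainty that inference exists to quantify, which is the commenter's point.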
A fundamental aspect of process improvement interventions is that they are local exercises conducted without intent to produce generalizable knowledge. The authors may wish to dissect their data to determine whether any specific factors are most important in achieving their objectives, but the structure and conduct of process improvement exercises imply that those findings are not intended to be applicable in other institutions. It is therefore unclear that the detailed statistical analysis should be published. Process improvement interventions are very important to optimizing clinical care and must be encouraged. They do not ordinarily require approval by an ethics review body (an Institutional Review Board or similar entity) because they are not considered research. The distinctive difference between a well documented process improvement intervention and a clinical research project is the intent to create generalizable knowledge; where that intent exists, patients become subjects of research, and ethical review and informed consent are required. This creates a conflict, because successful process improvement interventions in one institution might improve outcomes in another, so there is a substantial rationale for publication of such efforts. Some would then say that this moves the process improvement project into the category of clinical research, with a much more complicated set of requirements for the conduct and reporting of the results. Part of the confusion comes from the fact that much of the older neurosurgical literature consists essentially of informal process improvement projects (single-institution case series documenting changes in practice related to changes in outcome) that have been considered and presented as “clinical research” because a clear differentiation of these 2 categories had not been made.
A possible resolution of this conflict is to recognize the limitations in generalizability of process improvement projects and to derive from their publication hypotheses about generalizable interventions that must then be tested in a formal clinical research study. Publications of process improvement interventions should therefore focus on the details of implementation and the local results achieved, but avoid analyses suggesting that generalizable knowledge has been produced. Taking these issues into account, I think this is a useful report of a process improvement project. The authors refer to the SQUIRE (Standards for Quality Improvement Reporting Excellence) guidelines and do a good job of satisfying them. However, they stray across the boundary into clinical research when they apply tests of statistical inference (manifested as P values) to their observations about predictors of patient satisfaction and create multivariate regression models. If their intent is to create generalizable knowledge about patient satisfaction, this study should be viewed as a prospective cohort study and judged by the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines. For this purpose, the report falls short: there is no prespecified hypothesis to be tested; a more detailed description of the patient population would be required (to allow inference of applicability to other populations); and deeper analyses of potential confounding variables, the limitations of the study, and the constraints on generalization of the results should be given. As a result, the conclusion should simply be that in this practice all but one of the neurosurgeons had a good experience with this immediate patient feedback method, and that other neurosurgeons may want to try it.

Stephen J. Haines
Youssef Hamade
Minneapolis, Minnesota

The authors present an instrument to gauge patient satisfaction with outpatient encounters.
The questionnaire is short, inexpensive, and readily integrated into clinic activities, and it provides real-time feedback. The authors have also validated it against more comprehensive questionnaires. Furthermore, they have tested the questionnaire with outpatient spinal patients at their home institution and documented ease of use and a high completion rate. The open-ended questions invite useful suggestions to improve patient care. Our hospitals and insurance companies are increasingly demanding reports of patient satisfaction with our care, and the CST seems to address those demands admirably. Best of all, its use prompts follow-up with potentially unsatisfied patients and thus the opportunity to respond to concerns about the clinical encounter. Finally, there is the suggestion that its use is associated with improved provider performance and improved patient satisfaction over time.

Sherman C. Stein
Philadelphia, Pennsylvania

Copyright © 2018 by the Congress of Neurological Surgeons

Journal: Neurosurgery (Oxford University Press)
Published: Apr 14, 2018
