Key Points

Question: What is the association between electronic health record use and physician fatigue and efficiency?

Findings: In this cross-sectional study of 25 physicians completing 4 simulated cases of intensive care unit patients in the electronic health record, all physicians experienced fatigue at least once and 80% experienced fatigue within the first 22 minutes of electronic health record use, which was associated with less efficient electronic health record use (more time, more clicks, and more screens) on the subsequent patient case.

Meaning: Physicians experience electronic health record–related fatigue in short periods of continuous electronic health record use, which may be associated with inefficient and suboptimal electronic health record use.

Abstract

IMPORTANCE The use of electronic health records (EHRs) is directly associated with physician burnout. An underlying factor associated with burnout may be EHR-related fatigue owing to insufficient user-centered interface design and suboptimal usability.

OBJECTIVE To examine the association between EHR use and fatigue, as measured by pupillometry, and efficiency, as measured by mouse clicks, time, and number of EHR screens, among intensive care unit (ICU) physicians completing a simulation activity in a prominent EHR.

DESIGN, SETTING, AND PARTICIPANTS A cross-sectional, simulation-based EHR usability assessment of a leading EHR system was conducted from March 20 to April 5, 2018, among 25 ICU physicians and physician trainees at a southeastern US academic medical center. Participants completed 4 simulation patient cases in the EHR that involved information retrieval and task execution while wearing eye-tracking glasses. Fatigue was quantified through continuous eye pupil data; EHR efficiency was characterized through task completion time, mouse clicks, and EHR screen visits. Data were analyzed from June 1, 2018, to August 31, 2019.

MAIN OUTCOMES AND MEASURES Primary outcomes were physician fatigue, measured by pupillometry (with lower scores indicating greater fatigue), and EHR efficiency, measured by task completion times, number of mouse clicks, and number of screens visited during EHR simulation.

RESULTS The 25 ICU physicians (13 women; mean [SD] age, 33.2 [6.1] years) who completed a simulation exercise involving 4 patient cases (mean [SD] completion time, 34:43 [11:41] minutes) recorded a total of 14 hours and 27 minutes of EHR activity. All physician participants experienced physiological fatigue at least once during the exercise, and 20 of 25 participants (80%) experienced physiological fatigue within the first 22 minutes of EHR use. Physicians who experienced EHR-related fatigue in 1 patient case were less efficient in the subsequent patient case, as demonstrated by longer task completion times (r = −0.521; P = .007), higher numbers of mouse clicks (r = −0.562; P = .003), and more EHR screen visits (r = −0.486; P = .01).

CONCLUSIONS AND RELEVANCE This study reports high rates of fatigue among ICU physicians during short periods of EHR simulation, which were negatively associated with EHR efficiency and included a carryover association across patient cases. More research is needed to investigate the underlying causes of EHR-associated fatigue, to support user-centered EHR design, and to inform safe EHR use policies and guidelines.

Author affiliations and article information are listed at the end of this article.

JAMA Network Open. 2020;3(6):e207385. Corrected on June 24, 2020. doi:10.1001/jamanetworkopen.2020.7385

Open Access. This is an open access article distributed under the terms of the CC-BY License.
JAMA Network Open | Health Informatics
Association of EHR Use With Physician Fatigue and Efficiency

Introduction

Use of electronic health records (EHRs) is directly associated with physician burnout.1,2 Many physicians have voiced dissatisfaction with the click-heavy, data-busy interfaces of existing EHRs.1,3 Other factors associated with EHR frustration include scrolling through pages of notes and navigating through multiscreen workflows in the search for information. Excess EHR screen time leads to emotional distress in physicians and limits face-to-face contact with patients, resulting in higher rates of medical errors.5,6 Thus, common attitudes among physicians toward the EHR include "inefficient,"7 "time-consuming,"8 and "exhausting."9

Patient safety and quality of care depend on EHR usability.6,10 This fact is especially true in intensive care units (ICUs), where critically ill patients generate, on average, more than 1200 individual data points each day, and it has been estimated that ICU clinicians monitor about 187 alerts per patient per day, mostly through the EHR. Poor EHR design exacerbates this cycle, potentially affecting decision-making6 and causing delays in care, medical errors,6,13 and unanticipated patient safety events, especially in high-risk environments.14-16 Despite the challenges of today's EHR interfaces, much work remains to achieve truly user-centered EHR systems with better designs that improve efficiency (ie, mouse clicks and time), streamline decision-making processes, and support patient safety.17,18 Whereas traditional EHR usability testing often focuses on intrinsic, vendor-specific aspects of the system (such as screen layouts and workflows), it is important to distinguish EHR efficiency as extrinsic and dynamic: as much a function of the user as of the system itself.
Eye tracking, the study of movements of the eyes, and pupillometry, the measurement of pupil dilation, have been applied in many nonclinical domains. Eye-tracking research, which typically analyzes fixation duration, gaze points, and fixation counts, has been used to investigate users' engagement with advanced interfaces and website design, as well as visual attention in video games.20-22 In biomedicine, eye-tracking techniques have mostly been used to understand factors associated with interpretation of radiology studies, identification of medication allergies, reading progress notes in the EHR, and physician attention during cardiopulmonary bypass.23-26 Pupillometry, however, remains underused in medical research despite its promising capabilities.

The degree of pupillary constriction during a task is a validated biomarker for fatigue and alertness.27,28 Research has consistently shown that during conditions of fatigue, baseline pupil diameters are smaller than normal.29-33 A reduction in pupil size of 1 mm has been associated with signs of tiredness. Changes in pupil diameter are typically small, ranging between 0.87 and 1.79 mm from normal pupil size. In 1 study, significant correlations were found between individual differences in pupil size and mental workload for patients with anxiety, suggesting an association between these 2 indicators. Despite the potential of these technologies, eye tracking and pupillometry have yet to be used to understand EHR-related fatigue and its association with the user experience for clinicians.

The purpose of this study was to examine the association between EHR use and fatigue, as measured by pupillometry, and efficiency, as measured by completion time, mouse clicks, and number of EHR screens, among ICU physicians completing a simulation activity in a prominent EHR.
Methods

We conducted a cross-sectional, simulation-based EHR usability assessment of a leading EHR system (Epic; Epic Systems) among ICU physicians and physician trainees at a southeastern US academic medical center, after approval from the University of North Carolina at Chapel Hill Institutional Review Board. Details of our study methods have been reported previously. Testing took place from March 20 to April 5, 2018. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline. Participants provided written consent.

Study Setting and Participants

The study was conducted at a southeastern US tertiary academic medical center with a 30-bed medical ICU. We recruited participants through departmental emails and flyers. The eligibility criteria were: (1) medical ICU physicians (ie, faculty or trainee), (2) any previous experience using Epic in critical care settings, and (3) not wearing prescription glasses at the time of the study, to avoid interference with the eye-tracking glasses.

We recruited 25 medical ICU physicians for this study. Our sample exceeded conventional usability study standards, which recommend 5 to 15 participants to reveal 85% to 97% of usability issues.37,38 All testing took place in an onsite biobehavioral laboratory designed for simulation-based studies, equipped with a computer workstation with access to the institutional EHR training environment (Epic Playground), away from the live clinical environment. The computer screen was the standard screen clinicians use in their practice setting, with appropriate ergonomic placement, ambient lighting, and seating. Participants were recruited for a 1-hour individual session. Prior to each session, the principal investigator (S.K.)
explained the study protocol to participants, assuring them that our study aim was to assess EHR efficiency rather than their clinical knowledge. We asked participants to wear eye-tracking glasses (Tobii Pro Glasses 2; Tobii AB; eFigure 1 in the Supplement), which are extremely lightweight and do not impair vision. On sitting at the workstation, the glasses were calibrated for each participant to establish individual baseline pupil size. Each participant then logged into the EHR training environment and completed, in sequence, the same 4 ICU patient cases, which were developed by a domain expert (T.B.) and a physician trainee (C.C.), as published previously. Participants were asked to review a patient case (eTable 1 in the Supplement) and notify the research assistant when they completed their review. At that point, the research assistant asked the participant a series of interactive questions that involved verbal responses as well as completing EHR-based tasks. There were 21 total questions and tasks across the 4 patient cases (eTable 1 in the Supplement). Pupil diameter was recorded continuously during the entire study, and all participants used the same eye-tracking glasses. After participants completed the 4 cases, they removed the eye-tracking glasses, indicating the end of the study. Each participant received a $100 gift card on completion.

Outcomes

Primary outcomes were physician fatigue, measured by pupillometry (with lower scores indicating greater fatigue), and EHR efficiency, measured by completion time, number of mouse clicks, and number of screens visited during EHR simulation.

Measurements

Quantification of Fatigue

Fatigue was measured on a scale from −1 to 1, as advised by an eye-tracking specialist, with scores lower than baseline indicating signs of fatigue and negative scores (between 0 and −1) indicating actual physiological fatigue.
Simulation sessions occurred across a mix of conditions (morning and afternoon), with some participants undergoing testing on a day off or nonclinical day and other participants coming from a clinical shift in the medical ICU. Thus, to account for individual differences in baseline pupil size, we calculated a baseline for each participant, defined as the participant's mean pupil size during the first 5 seconds of calibration. We then determined acute changes in pupil size during the simulation exercise by subtracting each participant's baseline pupil size from his or her pupil size for each question or case. For each participant, we analyzed changes in pupil size to generate fatigue scores associated with the EHR simulation exercise by question and by case, according to the equations:

Fatigue per question: Left or Right Eye Fatigue Score = (Mean Pupil Size During the Last 5 Seconds of Answering a Given Question) − (Mean Pupil Size During the First 5 Seconds of Asking a Given Question).

Fatigue per case: Left or Right Eye Fatigue Score = (Mean Pupil Size During the Last 5 Seconds of the Case) − (Mean Pupil Size During the First 5 Seconds of the Entire Case).

Total Fatigue Score = [(Right Eye Fatigue Score) + (Left Eye Fatigue Score)] / 2.

Quantification of EHR Efficiency

We measured EHR efficiency by using standard usability software that ran in the background during the simulation exercises (TURF; University of Texas Health Science Center). This software includes a toolkit to capture task completion time, number of mouse clicks, and number of visited EHR screens for each case.

Statistical Analysis

Data were analyzed from June 1, 2018, to August 31, 2019.
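The per-question and per-case fatigue scores reduce to differences of 5-second windowed means of pupil diameter. The sketch below is illustrative only; the function names, the sampling rate, and the list-based representation of the pupil series are our assumptions, not the study's actual analysis code.

```python
def mean_pupil(samples, start_s, end_s, rate_hz=50):
    """Mean pupil diameter (mm) over the window [start_s, end_s) seconds,
    given a regularly sampled series (rate_hz samples per second)."""
    window = samples[int(start_s * rate_hz):int(end_s * rate_hz)]
    return sum(window) / len(window)

def eye_fatigue_score(samples, duration_s, rate_hz=50):
    """Per-case fatigue score for one eye:
    (mean pupil size, last 5 s of case) - (mean pupil size, first 5 s of case).
    Negative values indicate actual physiological fatigue."""
    first = mean_pupil(samples, 0, 5, rate_hz)
    last = mean_pupil(samples, duration_s - 5, duration_s, rate_hz)
    return last - first

def total_fatigue_score(right_eye, left_eye, duration_s, rate_hz=50):
    """Total score: average of the right- and left-eye fatigue scores."""
    return (eye_fatigue_score(right_eye, duration_s, rate_hz)
            + eye_fatigue_score(left_eye, duration_s, rate_hz)) / 2
```

The per-question score follows the same pattern, with the question's start and end times substituted for the case boundaries.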
We calculated summary and descriptive statistics for the primary outcome measures of fatigue and EHR efficiency, including subgroup analyses by sex and clinical role. To explore the association between fatigue and efficiency, we calculated Pearson correlation coefficients between fatigue scores and the EHR efficiency measures (time, mouse clicks, and number of EHR screens visited). All analyses were performed in SPSS, version 22.0 (SPSS Inc). All P values were from 2-sided tests, and results were deemed statistically significant at P < .05.

Results

We recorded a total of 14 hours and 27 minutes of EHR activity across 25 ICU physicians (13 women; mean [SD] age, 33.2 [6.1] years) who completed a simulation exercise involving 4 patient cases (mean [SD] completion time, 34:43 [11:41] minutes) (Table). There was an uneven distribution by clinical role, with more resident physicians (n = 11) and fellows (n = 9) than attending physicians (n = 5). Mean (SD) age tended to mirror clinical role, with residents being the youngest group (29.0 [1.4] years; fellows, 32.7 [0.5] years; and attending physicians, 44.0 [6.5] years). An inverse trend was noted between clinical role and the mean (SD) self-reported time spent per week using the EHR, with residents spending the most time (41.2 [13.5] hours) and attending physicians spending the least (8.3 [7.2] hours). The mean self-reported years of experience with Epic was similar across all 3 clinical roles.

Physician Fatigue

All participants experienced actual physiological fatigue at least once throughout the EHR simulation exercise, as evidenced by a negative fatigue score. Total fatigue scores for participants ranged from −0.804 to 0.801 (eTable 2 in the Supplement). Fatigue scores varied by case and by question or task.
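As described in the Statistical Analysis subsection, the fatigue-efficiency association was assessed by pairing each case's fatigue scores with the next case's efficiency measures across participants and computing Pearson correlations. A minimal, self-contained sketch of that computation follows; the numbers are invented for illustration and are not study data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant values: more negative fatigue scores in one
# case (greater fatigue) paired with more mouse clicks in the next case
# yields a negative correlation, matching the carryover pattern reported.
fatigue_case3 = [-0.30, -0.10, 0.00, 0.10, 0.20]
clicks_case4 = [420, 360, 330, 300, 280]
r = pearson_r(fatigue_case3, clicks_case4)
```

In practice a statistics package (eg, scipy.stats.pearsonr) would also return the 2-sided P value used against the significance threshold of P < .05.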
Figure 1 shows the distribution of physicians experiencing fatigue at the question level, ranging from 4 of 25 (16%) for relatively simple tasks involving basic information retrieval ("What was the patient's last outpatient weight prior to this ICU admission?") to 15 of 25 (60%) for tasks involving clinical ambiguity ("Reconcile a possibly spurious lab value"). Fifteen participants (60%) experienced fatigue by the end of reviewing case 3.

Cumulative Fatigue Over Time

Figure 2 shows the cumulative percentage of participants who experienced actual physiological fatigue at least once during the study, where each participant is counted as experiencing fatigue from the first instance. A total of 9 of 25 participants (36%) experienced fatigue within the first minute of the study; 16 of 25 participants (64%) experienced fatigue at least once within the first 20 minutes of the study, and 20 of 25 participants (80%) experienced fatigue after 22 minutes of EHR use. A sensitivity analysis was performed, in which we counted the second instance an individual experienced fatigue, and findings remained robust, as 19 of 25 participants (76%) experienced a second instance of fatigue within 1 minute of the first instance (Figure 2).

Table. Study Participant Demographic Characteristics, Descriptive Variables, and EHR Efficiency Variables

Demographics and EHR efficiency, mean (SD):

| Variable | No. | Age, y | Experience with EHR system, y(a) | Time using EHR system, h/wk(a) | Time, min:s | Mouse clicks, No. | EHR screens, No. |
|---|---|---|---|---|---|---|---|
| Total | 25 | 33.2 (6.1) | 4.2 (1.3) | 32.6 (23.0) | 34:43 (11:41) | 304 (79) | 85 (19) |
| Resident | 11 | 29.0 (1.4) | 4.0 (0.4) | 41.2 (13.5) | 36:54 (14:43) | 411.6 (90) | 94 (21) |
| Fellow | 9 | 32.7 (0.5) | 5.7 (0.9) | 39.3 (22.2) | 28:51 (05:52) | 312.7 (88) | 81 (16) |
| Attending physician | 5 | 44.0 (6.5) | 3.8 (0.4) | 8.3 (7.2) | 40:28 (06:10) | 316.8 (71) | 73 (8) |
| Female | 13 | 31.5 (3.1) | 4.0 (1.0) | 35.3 (25.4) | 31:37 (08:22) | 355 (101) | 89 (26) |
| Male | 12 | 34.9 (7.6) | 4.3 (1.4) | 29.8 (17.9) | 38:04 (13:40) | 301 (66) | 87 (15) |

Case fatigue scores, median (range)(b):

| Variable | Case 1: multiorgan failure | Case 2: respiratory failure | Case 3: severe sepsis | Case 4: volume overload |
|---|---|---|---|---|
| Total | 0.075 (−0.183 to 0.409) | −0.015 (−0.804 to 0.634) | −0.033 (−0.789 to 0.260) | −0.030 (−0.464 to 0.801) |
| Resident | 0.016 (−0.183 to 0.231) | −0.122 (−0.804 to 0.223) | −0.087 (−0.789 to 0.260) | −0.076 (−0.464 to 0.801) |
| Fellow | 0.053 (−0.103 to 0.409) | 0.002 (−0.267 to 0.634) | −0.030 (−0.223 to 0.173) | 0.001 (−0.226 to 0.207) |
| Attending physician | 0.091 (−0.141 to 0.227) | 0.132 (−0.04 to 0.247) | 0.100 (−0.074 to 0.145) | 0.007 (−0.178 to 0.361) |
| Female | 0.043 (−0.156 to −0.061) | −0.024 (−0.267 to 0.061) | −0.043 (−0.314 to −0.054) | 0.001 (−0.225 to 0.600) |
| Male | 0.086 (−0.183 to 0.225) | 0.067 (−0.267 to 0.061) | −0.052 (−0.789 to −0.616) | −0.088 (−0.463 to −0.102) |

Abbreviation: EHR, electronic health record.
a Self-reported variables.
b Score range, −1 to 1; lower scores indicate more fatigue; higher scores indicate less fatigue.
c Negative scores (<0.0) suggest actual physiological fatigue.

Figure 3 shows the distribution of physician fatigue scores at the case level, stratified by sex and clinical role. Across all participants, mean fatigue scores remained similar from 1 case to the next and tightly clustered around 0; however, we did see some variation. Overall fatigue scores were negative for cases 2 and 3.
Although there were differences in mean scores across subgroups, these differences were not statistically significant (Figure 3).

Efficiency

Participants completed the study in a mean (SD) of 34:43 (11:41) minutes, using 304 (79) mouse clicks, and visiting 85 (19) EHR screens (Table). Female physicians were faster than male physicians (mean [SD], 31:37 [8:22] vs 38:04 [13:40] minutes) but required more mouse clicks (mean [SD], 355 [101] vs 301 [66]). Fellows were faster (mean [SD], 28:51 [5:52] vs 36:54 [14:43] minutes) and more efficient (mean [SD], 312.7 [88] vs 411.6 [90] mouse clicks) compared with residents. Attending physicians visited the fewest EHR screens compared with fellows and residents (mean [SD], 73 [8] vs 81 [16] vs 94 [21]). None of the observed sex- or role-based differences in EHR efficiency reached statistical significance. One participant spent noticeably more time than the mean on the simulation task (approximately 73 minutes compared with a mean of approximately 34 minutes). Sensitivity analyses conducted with the omission of this participant led to no significant differences in study findings.

Figure 1. Percentage of Participants (N = 25) Experiencing Fatigue by Question. Questions 1 to 21 span case 1 (multiorgan failure), case 2 (respiratory failure), case 3 (sepsis), and case 4 (volume overload). See eTable 1 in the Supplement for descriptions of tasks and questions.

Figure 2. Cumulative Percentage of Users Experiencing Fatigue for the First and Second Instance During Electronic Health Record Simulation (N = 25), plotted against task time in minutes.
The Carryover Association of EHR-Related Fatigue With Physician Efficiency

Physicians' EHR efficiency was negatively associated with having experienced EHR-related fatigue. We observed a pattern in physicians' EHR use after experiencing fatigue in 1 case such that the subsequent case required more time, mouse clicks, and EHR screen visits to complete, irrespective of the nature or order of the case. These results suggest a carryover association: when participants experienced greater fatigue during 1 patient case (as evidenced by more negative fatigue scores), they were less efficient using the EHR during the subsequent patient case. Figure 4A and B provide scatterplots mapping these associations. Significant negative correlations were found between: fatigue scores for case 2 and the number of mouse clicks in case 3 (r = −0.481; P = .01), fatigue scores for case 3 and the number of mouse clicks in case 4 (r = −0.562; P = .003), fatigue scores in case 3 and the time to complete case 4 (r = −0.521; P = .007), and fatigue scores in case 3 and the number of EHR screens visited in case 4 (r = −0.486; P = .01). The association between fatigue scores for case 1 and the number of EHR screens visited in case 2 was not significant (r = −0.381; P = .06).

Figure 3. Distribution of Physician Fatigue Scores During Electronic Health Record Activity by Sex and Role. A, Case of a 44-year-old woman with multiorgan failure. B, Case of a 60-year-old woman with respiratory failure. C, Case of a 25-year-old man with sepsis. D, Case of a 56-year-old man with volume overload. Lower fatigue scores indicate greater fatigue. The top and bottom bars indicate the first and third quartiles, respectively; the diamond indicates the mean; the horizontal line in the bars indicates the median; and vertical lines indicate minimum and maximum values.

Our sensitivity analysis of the carryover showed similar patterns. When removing outliers, we observed the same negative correlations between fatigue scores and efficiency measures in the subsequent cases, as shown in Figure 4 and eFigure 2 and eTable 3 in the Supplement.

Discussion

To our knowledge, this cross-sectional, simulation-based EHR usability study is the first to use pupillometry to assess the association of EHR activity with fatigue and efficiency among ICU physicians. We report that 20 of 25 physician participants (80%) experienced physiological fatigue at least once within 22 minutes of EHR use, as measured by pupillometry. Experiencing EHR-related fatigue was negatively associated with EHR efficiency as measured by time, mouse clicks, and screen visits. We observed a carryover association: when participants experienced greater fatigue during 1 patient case, they were less efficient using the EHR during the subsequent patient case. There was an inverse association and a temporal component between fatigue scores and multiple domains of EHR efficiency spanning patient cases.
This finding was most consistent with mouse clicks: across multiple sets of consecutive cases, lower fatigue scores on 1 case (indicating greater physiological fatigue) were associated with more mouse clicks on the subsequent case. To a lesser degree, we also observed an association between greater physiological fatigue during 1 case and needing more time and more screen visits in the subsequent case, although this pattern was limited to just 1 set of consecutive patient cases. These findings are hypothesis-generating, especially from the standpoint of the patient: if clinicians experience EHR-induced fatigue during the care of 1 patient, it may be associated with the care of the next patient in ways that are worthy of further investigation.

When compared with a typical day in an ICU, the simulation undertested the clinical demands of a physician. First-year trainees routinely review 5 or more patients, while upper-level residents, fellows, and attending physicians routinely review 12 or more patients. Even small differences in EHR efficiency measures during a single patient case, such as 10 to 20 mouse clicks or 30 to 60 seconds, could be clinically significant to a busy physician when scaled to a typical workload of 12 or more patients. Thus, the preliminary findings of this study may be increasingly pronounced as the number of patients reviewed in the EHR rises.

Figure 4. Association Between Fatigue Score in 1 Case and Electronic Health Record (EHR) Efficiency in the Subsequent Case. A, Fatigue score in case 3 and efficiency (number of mouse clicks) in case 4. B, Fatigue score in case 3 and efficiency (number of EHR screens viewed) in case 4.
Previous Research Findings

Prior studies using pupillometry in EHR simulation have examined physician workload (pupil dilation) among emergency department and hospitalist physicians as well as physician workload (blink rates) among primary care physicians managing follow-up test results in the outpatient setting.39-42 Our study adds value by using pupillometry to characterize physician fatigue among intensivists managing critically ill patients, a particularly high-stakes setting. We also add nuance by extending our analysis to examine physician fatigue and EHR efficiency over time and across multiple cases, which mirrors the reality of clinical workflows in most inpatient settings.27,43 The finding that physiological fatigue appears to occur in short periods of EHR-related work among physicians is itself an important advancement, given that fatigue is one of the leading human factors associated with errors and accidents in the workplace44,45 and that it can co-occur with burnout.46

Strengths and Limitations

This study has some strengths, including the use of high-fidelity patient cases and clinically relevant interactive tasks, inclusion of physicians from different levels of training and clinical experience, the use of a leading EHR system, and the relatively large sample size (n = 25) that exceeds the typical threshold for usability studies. Furthermore, our approach to identifying and quantifying fatigue is a conservative one because we use relative pupil size changes and baseline testing rather than instantaneous (absolute) changes, so our findings may understate the actual physiological burden of EHR-related fatigue.
There are limitations in the study methods, procedures, and analysis that could potentially lead to the misinterpretation of findings. First, as this was a single-site study, we cannot exclude the possibility of selection bias, although we aimed to achieve a balance of sex representation and clinical roles. Second, cases were not randomized between participants in the simulation task, so it is possible that the observed fatigue was associated with case order. We also did not control for case-level features, such as clinical acuity or number of tasks, that might have explained the differences in time, number of EHR screens, and mouse clicks. However, in the natural clinical environment there will always be variation in case complexity and task requirements from one patient to the next, so we wanted to mimic real-world clinical workflows. Third, because all participants used the same eye-tracking glasses, there is the possibility of nondifferential measurement bias in the pupillometry data, which would introduce a conservative bias. Fourth, we did not collect subjective measures of fatigue from participants, as doing so for each case and question would have interrupted the flow of the study. Thus, we are unable to analyze the moment-to-moment association between objective fatigue, which we report, and subjective fatigue, which may be more clinically relevant. Fifth, in one case, the eye-tracking glasses' built-in battery died, which required an interruption to the activity.

Future Directions

These findings open the door for many potential research questions and opportunities for future work. Although we observed fatigue among participants using the EHR, it is unknown whether this fatigue was simply owing to the challenging nature of reviewing cases of critically ill patients or whether certain aspects of EHR design, such as screen layouts or workflows, played a role.
Future research is needed to better understand the complex association between EHR-related fatigue and care outcomes. Additional work should randomize case order and should evaluate differences in perceived satisfaction and physiological fatigue levels, since our preliminary findings may show a discrepancy between perceived and actual EHR association. Furthermore, testing should be expanded to include clinical practitioners from other roles whose work is EHR-intensive, such as nursing, respiratory therapy, and social work. Finally, additional work is needed to better understand the association of user-centered design with EHR performance, satisfaction, usability, and patient outcomes.

Conclusions

We observed high rates of fatigue among ICU physicians during short periods of EHR simulation, which were negatively associated with EHR efficiency and included a carryover association across patient cases. More research is needed to investigate the underlying causes of EHR-associated fatigue, to support user-centered EHR design, and to inform safe EHR use policies and guidelines.

ARTICLE INFORMATION

Accepted for Publication: March 11, 2020.

Published: June 9, 2020. doi:10.1001/jamanetworkopen.2020.7385

Correction: This article was corrected on June 24, 2020, to fix errors in demographic data in the Results section of the abstract and text and in Table 1.

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2020 Khairat S et al. JAMA Network Open.

Corresponding Author: Saif Khairat, PhD, MPH, Carolina Health Informatics Program, University of North Carolina at Chapel Hill, 438 Carrington Hall, Chapel Hill, NC 27514 (email@example.com).
Author Affiliations: Carolina Health Informatics Program, University of North Carolina at Chapel Hill (Khairat, Jayachander); School of Nursing, University of North Carolina at Chapel Hill (Khairat); Department of Preventive Medicine, University of North Carolina at Chapel Hill (Coleman); Gillings School of Global Public Health, University of North Carolina at Chapel Hill (Ottmar); Pulmonary Diseases and Critical Care Medicine, University of North Carolina at Chapel Hill (Bice, Carson).

Author Contributions: Drs Khairat and Coleman had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Khairat, Coleman, Bice, Carson.

Acquisition, analysis, or interpretation of data: Khairat, Coleman, Ottmar, Jayachander, Carson.

Drafting of the manuscript: Khairat, Coleman, Ottmar, Jayachander.

Critical revision of the manuscript for important intellectual content: Khairat, Coleman, Bice, Carson.

Statistical analysis: Khairat, Coleman, Ottmar.

Administrative, technical, or material support: Ottmar.

Supervision: Khairat, Bice, Carson.

Conflict of Interest Disclosures: Dr Carson reported receiving grants from Biomarck Pharmaceuticals outside the submitted work. No other disclosures were reported.

Funding/Support: This work was supported by grant 1T15LM012500-01 from the National Library of Medicine, which supports Dr Coleman in postdoctoral informatics training.

Role of the Funder/Sponsor: The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.

Additional Contributions: We acknowledge Donald Spencer, CMIO, and the Epic team at University of North Carolina Health for building the Epic cases and for providing personnel support and dedicated server space to run our study.
We also acknowledge the Biobehavioral Lab at the School of Nursing and the CHAI Core at the University of North Carolina at Chapel Hill for providing the research facility and technical support; research assistants Thomas Newlin, Victoria Rand, and Lauren Zalla for their assistance with data collection and analysis; and Katherine Martin, eye-tracking specialist. None of these individuals were compensated.

SUPPLEMENT

eFigure 1. Eye-Tracking Device Used in the Study (Tobii Pro 2 Glasses)
eTable 1. Description and Categorization of Each of the Four Patient Cases as Reported by MICU Domain Expert
eTable 2. Individual Fatigue Scores for Each Eye and Total Fatigue Score for Each Participant, Averaged Across All Four Simulation Patient Cases
eFigure 2. Scatter Plots of Carryover Effect Between Cases 3 and 4 Including Outliers
eTable 3. Analysis With and Without Outliers

Supplementary Online Content

Khairat S, Coleman C, Ottmar P, Jayachander DI, Bice T, Carson SS. Association of electronic health record use with physician fatigue and efficiency. JAMA Netw Open. 2020;3(6):e207385. doi:10.1001/jamanetworkopen.2020.7385

This supplementary material has been provided by the authors to give readers additional information about their work.

eFigure 1. Eye-Tracking Device Used in the Study (Tobii Pro 2 Glasses)
[Figure: photograph of the Tobii Pro 2 eye-tracking glasses worn by participants]

eTable 1. Description and Categorization of Each of the Four Patient Cases as Reported by MICU Domain Expert

Case 1. Multisystem Organ Failure
Description: 44-year-old female with multisystem organ failure and undifferentiated shock. Participants review clinical documentation, manage medications, and respond to consultations.
Interactive questions and tasks: How many services have been consulted? How many consult teams have seen the patient? Have labs been ordered per the ID consultant? Have labs been collected? Do current antibiotics, as ordered, match the plan in the note? Correct order modification?

Case 2. Acute Hypoxic Respiratory Failure
Description: 60-year-old female with acute hypoxic respiratory failure due to pneumonia. Participants review clinical documentation and flowsheets, evaluate changes in mechanical ventilation, and analyze microbiology data.
Interactive questions and tasks: Was there a change in ventilator settings? What change(s) occurred? Why did the change occur (clinical reason)? Are microbiology data available? Specific microbiology results?

Case 3. Sepsis
Description: 25-year-old male with severe infection (sepsis) due to a skin/soft tissue wound of the leg. Participants assess flowsheets, laboratory data, antibiotics, and fluid management.
Interactive questions and tasks: Explanation for duplicate labs? Antibiotics received since yesterday? Antibiotics currently ordered? IV fluids received since yesterday? IV fluids administered = clinically appropriate? Are any labs currently ordered? Order additional lab tests.

Case 4. Volume Overload
Description: 56-year-old male trauma patient with postoperative heart failure and volume overload. Participants identify weight trends during previous visits and manage IV fluids and medications.
Interactive questions and tasks: Net fluid status since admission? Current weight? Last clinic weight? Manage IV fluid orders.

eTable 2. Individual Fatigue Scores for Each Eye and Total Fatigue Score for Each Participant, Averaged Across All Four Simulation Patient Cases

Participant  Right eye fatigue score  Left eye fatigue score  Total fatigue score
P1    0.324   0.206   0.265
P2    0.351  -0.121   0.115
P3   -0.579   0.265  -0.157
P4   -0.517  -0.327  -0.422
P5   -0.089  -0.196  -0.1425
P6   -0.107  -0.281  -0.194
P7   -0.2    -0.341  -0.2705
P8   -0.057  -0.064  -0.0605
P9   -0.113  -0.345  -0.229
P10  -0.189  -0.313  -0.251
P11  -0.026   0.025  -0.0005
P12   0.111   0.06    0.0855
P13  -0.074  -0.08   -0.077
P14  -0.054  -0.069  -0.0615
P15  -0.2    -0.272  -0.236
P16  -0.145  -0.161  -0.153
P17  -0.828  -0.901  -0.8645
P18  -0.105  -0.069  -0.087
P19  -0.174   0.49    0.158
P20   0.324   0.308   0.316
P21  -0.146  -0.113  -0.1295
P22  -0.303  -0.31   -0.3065
P23   0.056   0.03    0.043
P24  -0.377  -0.358  -0.3675
P25  -0.256  -0.284  -0.27

eFigure 2. Scatter Plots of Carryover Effect Between Cases 3 and 4 Including Outliers
[Figure: scatter plots of case 3 fatigue scores against case 4 efficiency measures, including outliers]

eTable 3.
Analysis With and Without Outliers Analysis With Outliers Case 1 Fatigue -> Case 2 Efficiency Pearson Correlation Coefficients, N = 25 Prob > |r| under H0: Rho=0 fatigue screens clicks minutes seconds fatigue 1.00000 -0.38060 -0.26020 0.15583 0.15583 0.0605 0.2091 0.4570 0.4570 screens -0.38060 1.00000 0.32554 -0.04419 -0.04419 0.0605 0.1123 0.8339 0.8339 clicks -0.26020 0.32554 1.00000 0.26871 0.26871 0.2091 0.1123 0.1940 0.1940 minutes 0.15583 -0.04419 0.26871 1.00000 1.00000 0.4570 0.8339 0.1940 <.0001 seconds 0.15583 -0.04419 0.26871 1.00000 1.00000 0.4570 0.8339 0.1940 <.0001 © 2020 Khairat S et al. JAMA Network Open. Case 2 Fatigue -> Case 3 Efficiency Pearson Correlation Coefficients, N = 25 Prob > |r| under H0: Rho=0 fatigue screens clicks minutes seconds fatigue 1.00000 -0.04383 -0.48106 -0.22906 -0.22906 0.8352 0.0149 0.2707 0.2707 screens -0.04383 1.00000 0.39073 -0.14633 -0.14633 0.8352 0.0535 0.4852 0.4852 clicks -0.48106 0.39073 1.00000 0.43487 0.43487 0.0149 0.0535 0.0298 0.0298 minutes -0.22906 -0.14633 0.43487 1.00000 1.00000 0.2707 0.4852 0.0298 <.0001 seconds -0.22906 -0.14633 0.43487 1.00000 1.00000 0.2707 0.4852 0.0298 <.0001 © 2020 Khairat S et al. JAMA Network Open. Case 3 Fatigue -> Case 4 Efficiency Pearson Correlation Coefficients, N = 25 Prob > |r| under H0: Rho=0 fatigue screens clicks minutes seconds fatigue 1.00000 -0.48640 -0.56229 -0.52154 -0.52154 0.0137 0.0034 0.0075 0.0075 screens -0.48640 1.00000 0.48932 0.54252 0.54252 0.0137 0.0130 0.0051 0.0051 clicks -0.56229 0.48932 1.00000 0.29902 0.29902 0.0034 0.0130 0.1465 0.1465 minutes -0.52154 0.54252 0.29902 1.00000 1.00000 0.0075 0.0051 0.1465 <.0001 seconds -0.52154 0.54252 0.29902 1.00000 1.00000 0.0075 0.0051 0.1465 <.0001 © 2020 Khairat S et al. JAMA Network Open. 
Analysis Without Outliers

Case 1 fatigue and case 2 efficiency. Pearson correlation coefficients, N = 24; P value under H0: rho = 0 in parentheses.

          fatigue            screens            clicks             minutes            seconds
fatigue   1.00000            -0.31252 (.1371)   -0.21873 (.3045)   0.26717 (.2069)    0.26717 (.2069)
screens   -0.31252 (.1371)   1.00000            0.30315 (.1499)    -0.07976 (.7110)   -0.07976 (.7110)
clicks    -0.21873 (.3045)   0.30315 (.1499)    1.00000            0.25359 (.2318)    0.25359 (.2318)
minutes   0.26717 (.2069)    -0.07976 (.7110)   0.25359 (.2318)    1.00000            1.00000 (<.0001)
seconds   0.26717 (.2069)    -0.07976 (.7110)   0.25359 (.2318)    1.00000 (<.0001)   1.00000

Case 2 fatigue and case 3 efficiency. Pearson correlation coefficients, N = 23; P value in parentheses.

          fatigue            screens            clicks             minutes            seconds
fatigue   1.00000            -0.11835 (.5907)   -0.01230 (.9556)   0.58993 (.0030)    0.58993 (.0030)
screens   -0.11835 (.5907)   1.00000            0.55494 (.0060)    -0.10859 (.6219)   -0.10859 (.6219)
clicks    -0.01230 (.9556)   0.55494 (.0060)    1.00000            0.10587 (.6307)    0.10587 (.6307)
minutes   0.58993 (.0030)    -0.10859 (.6219)   0.10587 (.6307)    1.00000            1.00000 (<.0001)
seconds   0.58993 (.0030)    -0.10859 (.6219)   0.10587 (.6307)    1.00000 (<.0001)   1.00000

Case 3 fatigue and case 4 efficiency. Pearson correlation coefficients, N = 24; P value in parentheses.

          fatigue            screens            clicks             minutes            seconds
fatigue   1.00000            -0.31057 (.1397)   -0.49279 (.0144)   0.07615 (.7236)    0.07615 (.7236)
screens   -0.31057 (.1397)   1.00000            0.40835 (.0476)    0.40490 (.0497)    0.40490 (.0497)
clicks    -0.49279 (.0144)   0.40835 (.0476)    1.00000            0.07497 (.7277)    0.07497 (.7277)
minutes   0.07615 (.7236)    0.40490 (.0497)    0.07497 (.7277)    1.00000            1.00000 (<.0001)
seconds   0.07615 (.7236)    0.40490 (.0497)    0.07497 (.7277)    1.00000 (<.0001)   1.00000

© 2020 Khairat S et al. JAMA Network Open.
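The supplementary analyses reduce to two simple computations: the total fatigue score is the mean of the right- and left-eye scores (eTable 2), and the carryover effect is the Pearson correlation between fatigue in one case and each efficiency metric in the subsequent case (eTable 3). The following is a minimal sketch in pure Python; the P1 eye scores are taken from eTable 2, while the carryover arrays are illustrative placeholders, not the study's raw per-participant measurements:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    dx = [xi - mx for xi in x]
    dy = [yi - my for yi in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den

# Total fatigue score per participant = mean of right- and left-eye scores
# (values for participant P1 from eTable 2; lower scores = greater fatigue).
right_eye, left_eye = 0.324, 0.206
total_fatigue = (right_eye + left_eye) / 2  # 0.265, matching eTable 2

# Carryover analysis: correlate each participant's fatigue score in case k
# with an efficiency metric (e.g., mouse clicks) in case k + 1.
# These six data points are hypothetical, for illustration only.
fatigue_case3 = [-0.42, -0.27, -0.06, 0.12, -0.86, -0.15]
clicks_case4 = [310, 260, 190, 150, 420, 230]
r = pearson_r(fatigue_case3, clicks_case4)  # negative: more fatigue, more clicks
```

With the study's actual per-participant data, the same call would reproduce the reported r = -0.562 for the case 3 fatigue vs case 4 clicks analysis; the associated two-sided P value comes from a t test with N - 2 degrees of freedom (as computed by, e.g., scipy.stats.pearsonr).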