Use of a Structured Assessment to Measure Nontechnical Aspects of Otolaryngology Residency Training

An educational curriculum that produces competent clinicians is the cornerstone of all residency training programs. The development of this clinical expertise has many components. Some, such as knowledge acquisition, technical expertise, and experience, are more tangible and measurable. Others, such as clinical judgment and critical thinking, are more enigmatic and yet equally if not more important. The challenge of how to teach these less tangible attributes has confounded medical educators since Hippocrates and continues to do so. Lacking better definition, we as educators tend to rely on a “we know it when we see it” method of assessing our trainees’ progress in their acquisition. In general, there tends to be agreement among the faculty about resident expertise at the extremes: the residents we know will do the right thing regardless of their level of training and the residents whose clinical decision-making we have difficulty trusting despite an adequate knowledge base. Sorting out the residents in between, and defining the specific issues that are impeding their progress along the clinical continuum, is more difficult. To do better, we need to develop valid methods for measuring these “less tangible” qualities.

With this in mind, Shin and colleagues1 from the Harvard Medical School program have over the past 6 years developed a measurement instrument (Clinical Practice Instrument [CPI]) designed for trainee assessment in the context of a highly structured oral board format examination. The CPI is similar in concept to instruments used more extensively in quality-of-life surveys. It contains 21 questions, completed by the examiner, with either Likert scale or dichotomous responses depending on the issue being measured. The instrument is designed to assess the domains of information gathering, knowledge base, systematic organization, integration of aggregate patient findings, clinical judgment, and understanding of the individual elements required to execute a treatment plan. In their earlier published work,2 this instrument was shown to be reliable between different raters, internally consistent, responsive to change, and valid from a criterion and discrimination standpoint.

Building on this previous validation work, the authors describe their experience using this instrument for repeated testing of the same residents over the course of their residency training. They found not only that total scores differed significantly among postgraduate year (PGY) levels of training (PGY 2 level vs PGY 4 or PGY 5 level) but also that domain scores related to information gathering and organizational skills were acquired earlier in training, whereas knowledge base and clinical judgment improved later in residency.
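To make the notion of domain and total scores concrete, the following is a minimal sketch, in Python, of how responses from an examiner-completed instrument of this kind could be aggregated. It is purely illustrative: the actual CPI items, domain assignments, response scales, and scoring rules are not specified in this editorial, so every item name, scale, and number below is an assumption rather than the authors’ method.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical item model. The real CPI's 21 items, their domains, and their
# scoring are not given here; these fields are illustrative assumptions only.
@dataclass
class Item:
    domain: str        # e.g., "information gathering", "clinical judgment"
    response: float    # Likert rating (e.g., 1-5) or 0/1 for a dichotomous item
    max_score: float   # maximum attainable value for this item

def domain_scores(items: list[Item]) -> dict[str, float]:
    """Average each domain's items, expressed as a percentage of the maximum."""
    by_domain: dict[str, list[float]] = {}
    for item in items:
        by_domain.setdefault(item.domain, []).append(item.response / item.max_score)
    return {domain: 100 * mean(vals) for domain, vals in by_domain.items()}

def total_score(items: list[Item]) -> float:
    """Overall score as the mean of all item percentages."""
    return 100 * mean(item.response / item.max_score for item in items)

# A toy examination with three items; a real administration would have 21.
exam = [
    Item("information gathering", 4, 5),
    Item("clinical judgment", 3, 5),
    Item("knowledge base", 1, 1),   # dichotomous item answered correctly
]
print(domain_scores(exam), round(total_score(exam), 1))
```

Tracking such domain and total scores for the same resident at successive administrations is, in essence, what the repeated-testing design described above allows, with differences then compared across PGY levels.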
Clearly, the concept of a structured assessment technique, such as the CPI, is an improvement over the current typical oral board format, in which the criteria for assessment are less specific and rely more heavily on the examiner’s “gestalt” of the candidate. What is less clear is whether the questions asked in the CPI are truly able to assess intangibles, such as judgment and critical thinking, as factors independent of knowledge acquisition and overall experience. The findings of this article1 suggest this to be the case in the context of how the authors defined these domains relative to the CPI questions asked; however, do these domain scores really reflect these qualities in everyday practice? Acquiring (and teaching) the ability to differentiate a sick patient who is getting better from one who seems well but is getting sicker, to know when to operate and, more important, when not to, and to know when to treat and when to observe are extraordinarily complex issues. They are, however, at the core of the clinical expertise we strive to create.

A continued focus on refining the questions in the assessment tool is needed. This will come through broader, multi-institutional application of this tool to the assessment of different forms of real-time clinical interaction, paired with longer-term follow-up of trainees as they go out into practice. Only by doing this will we better understand whether we are measuring the right things. In the interim, Shin and colleagues1 are to be congratulated for their sustained efforts, which have helped raise awareness of and focus on these issues.

Article Information

Corresponding Author: James Cohen, MD, PhD, Department of Otolaryngology–Head and Neck Surgery, Oregon Health Sciences University, 3710 US Veterans Hospital Rd, PO Box 1034, Mail Code P3-OC, Portland, OR 97210 (James.Cohen2@va.gov).

Published Online: February 25, 2016. doi:10.1001/jamaoto.2015.3910

Conflict of Interest Disclosures: None reported.

References

1. Shin JJ, Cunningham MJ, Emerick KG, Gray ST. Measuring nontechnical aspects of surgical clinician development in an otolaryngology residency training program [published online February 25, 2016]. JAMA Otolaryngol Head Neck Surg. doi:10.1001/jamaoto.2015.3642

2. Shin JJ, Page JH, Shapiro J, Gray ST, Cunningham MJ. Validation of a clinical practice ability instrument for surgical training. Otolaryngol Head Neck Surg. 2010;142(4):493-499.e1.


JAMA Otolaryngology–Head & Neck Surgery, Volume 142(5), May 1, 2016


Publisher
American Medical Association
Copyright
Copyright © 2016 American Medical Association. All Rights Reserved.
ISSN
2168-6181
eISSN
2168-619X
DOI
10.1001/jamaoto.2015.3910


Journal

JAMA Otolaryngology–Head & Neck Surgery, American Medical Association

Published: May 1, 2016

Keywords: clinical competence; education, medical, graduate; internship and residency; judgment; otolaryngology; professional competence; task performance and analysis; health care decision making; academic training; training
