Performance Validity, Neurocognitive Disorder, and Post-concussion Symptom Reporting in Service Members with a History of Mild Traumatic Brain Injury

Objective: To examine the influence of different performance validity test (PVT) cutoffs on neuropsychological performance, post-concussion symptoms, and rates of neurocognitive disorder and postconcussional syndrome following mild traumatic brain injury (MTBI) in active duty service members. Method: Participants were 164 service members (Age: M = 28.1 years [SD = 7.3]) evaluated on average 4.1 months (SD = 5.0) following injury. Participants were divided into three mutually exclusive groups using original and alternative cutoff scores on the Test of Memory Malingering (TOMM) and the Effort Index (EI) from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS): (a) PVT-Pass, n = 85; (b) Alternative PVT-Fail, n = 53; and (c) Original PVT-Fail, n = 26. Participants also completed the Neurobehavioral Symptom Inventory. Results: The PVT-Pass group performed better on cognitive testing and reported fewer symptoms than the two PVT-Fail groups. The Original PVT-Fail group performed more poorly on cognitive testing and reported more symptoms than the Alternative PVT-Fail group. Both PVT-Fail groups were more likely to meet DSM-5 Category A criteria for mild and major neurocognitive disorder and symptom reporting criteria for postconcussional syndrome than the PVT-Pass group. When alternative PVT cutoffs were used instead of original PVT cutoffs, the number of participants with valid data meeting cognitive testing criteria for neurocognitive disorder or postconcussional syndrome decreased dramatically. Conclusion: PVT performance is significantly and meaningfully related to overall neuropsychological outcome. By using only original cutoffs, clinicians and researchers may miss people with invalid performances.

Keywords: Mild traumatic brain injury; Performance validity; Neurocognitive disorder; Postconcussional syndrome; Effort

doi:10.1093/arclin/acx098. Advance Access publication on 21 October 2017. © Published by Oxford University Press 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

Introduction

Traumatic brain injury (TBI) has been one of the most common injuries sustained by US military service members during the conflicts in Iraq and Afghanistan (Hoge et al., 2008; Terrio et al., 2009), with the majority of these injuries (i.e., 85%; MacGregor et al., 2010) being classified as mild. Typically, symptoms improve and resolve within days, weeks, or months following a mild TBI (MTBI); however, for a small minority of people, non-specific symptoms and problems may persist long term (Carroll et al., 2014). There are many reasons why a person might have chronic symptoms and problems following a MTBI, such as having co-occurring difficulties with traumatic stress (Boyle et al., 2014; Lagarde et al., 2014) and depression (Iverson et al., 2017; Solomon, Kuhn, & Zuckerman, 2016), or involvement in compensation claims, such as worker's compensation, personal injury litigation, or disability determinations (Binder & Rohling, 1996; Iverson, Zasler, & Lange, 2006). For people who do not recover in a timely manner, a comprehensive neuropsychological evaluation is recommended (McCrea et al., 2008).
Assessment of performance validity (Larrabee, 2012) is considered standard practice when conducting neuropsychological evaluations of individuals who are involved in litigation or compensation-related evaluations. Formal evaluation of performance validity is considered an integral component of all neuropsychological evaluations and has been strongly supported in position papers by two prominent neuropsychological societies (Bush et al., 2005; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009). Additionally, the Military TBI Task Force (McCrea et al., 2008) recommended that all neuropsychological assessment methods include performance validity tests (PVTs). It has been estimated that 17% (Whitney, Shepard, Williams, Davis, & Adams, 2009) to 58% (Armistead-Jehle, 2010) of veterans and service members presenting for neuropsychological evaluation fail PVTs. Typically, neuropsychological assessment in service members contributes to decision-making regarding whether they are fit for duty, are deployable, should be restricted from hazardous duty, and/or require a medical board evaluation or disability pension. These unique opportunities for secondary gain may explain the high rate of PVT failure in military populations. Indeed, rates of PVT failure vary significantly by evaluation context (Armistead-Jehle & Buican, 2012; Nelson et al., 2010; McCormick, Yoash-Gantz, McDonald, Campbell, & Tupler, 2013).

There is substantial evidence that failure of PVTs is associated with low neuropsychological test scores (Constantinou, Bauer, Ashendorf, Fisher, & McCaffrey, 2005; Green, 2007; Green, Rohling, Lees-Haley, & Allen, 2001; Iverson, 2006; Stevens, Friedel, Mehren, & Merten, 2008; Lange, Pancholi, Bhagwat, Anderson-Barnes, & French, 2012). PVT failure has also been associated with worse self-reported general functioning (Lippa, Pastorek, et al., 2014), greater cognitive complaints (Armistead-Jehle, Gervais, & Green, 2012), and greater post-concussive symptoms (Iverson, Lange, Brooks, & Rennison, 2010; Kirkwood, Peterson, Connery, Baker, & Grubenhoff, 2014; Lange, Iverson, Brooks, & Rennison, 2010; Tsanadis et al., 2008). Therefore, failure of PVTs is associated with greater symptoms, worse cognitive functioning, and greater disability, but the accuracy of those clinical outcome measures is uncertain.

One of the most widely used PVTs (Sharland & Gfeller, 2007) is the Test of Memory Malingering (TOMM; Tombaugh, 1996). The TOMM manual specifies cutoffs for identifying response bias. However, more recent studies have suggested the use of higher cutoff scores on the TOMM in MTBI patients can increase its sensitivity to detect invalid performance while maintaining acceptable specificity (e.g., ≤48 on Trial 2; Greve, Bianchini, & Doane, 2006; Jones, 2013; Stenclik, Miele, Silk-Eglit, Lynch, & McCaffrey, 2013). Moreover, several studies have used TOMM Trial 1 as an indicator of invalid performance in MTBI patients, with cutoff scores ranging from ≤38 to ≤42 (Bauer, O'Bryant, Lynch, McCaffrey, & Fisher, 2007; Denning, 2012; Greve et al., 2006; Jones, 2013; Stenclik et al., 2013). Denning (2012) provides a summary of multiple studies that have investigated TOMM Trial 1 cutoffs, the weighted average of which was a cutoff of ≤40. Administering formal stand-alone PVTs, such as the TOMM, typically takes 10–20 min (Strauss, Sherman, & Spreen, 2006), which may reduce the amount of time available for administration of other neuropsychological tests.
As such, there has been a recent focus on the development and validation of PVTs that are embedded into existing cognitive tests. One such embedded PVT is the Effort Index (EI; Silverberg, Wertheimer, & Fichtenberg, 2007) from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS; Randolph, Tierney, Mohr, & Chase, 1998). The RBANS EI has been used in a number of studies with service members and veterans (Jones, 2016; Lippa, Lange, Bhagwat, & French, 2017; Paulson, Horner, & Bachman, 2015; Zimmer, Heyanka, & Proctor-Weber, 2017); individuals with Parkinson's disease (Carter, Scott, Adams, & Linck, 2016), schizophrenia (Moore et al., 2013; Morra, Gold, Sullivan, & Strauss, 2015), Huntington's disease (Sieck, Smith, Duff, Paulsen, & Beglinger, 2013), and dementia (Dunham, Shadi, Sofko, Denney, & Calloway, 2014; Morra et al., 2015); and in those involved in compensation-seeking disability evaluations (Crighton, Wygant, Holt, & Granacher, 2015). The EI is calculated by creating a weighted composite score from performance on the Digit Span and List Recognition subtests. In their original study, Silverberg et al. (2007) concluded that an EI score >3 reflects invalid performance because it was rarely found in the general population. The authors also concluded that an EI score ≥1 is most sensitive with patients with post-acute MTBI when the goal is to determine if the performance is invalid. Other studies have supported the cutoff of >3 in geriatric (Barker, Horner, & Bachman, 2010; Hook, Marquine, & Hoelzle, 2009) and veteran samples (Barker et al., 2010; Young, Baughman, & Roper, 2012). An investigation of the two suggested cutoffs in an MTBI sample revealed that the ≥1 cutoff classified MTBI participants more accurately than the >3 cutoff (Armistead-Jehle & Hansen, 2011).

Although alternative PVT cutoffs have been supported for use with MTBI patients in the literature, these cutoffs do not yet appear to be widely adopted for use with this population. We have observed that many clinical neuropsychologists do not apply the newer, more sensitive, and slightly less specific PVT cutoffs with their MTBI patients. Recent MTBI research studies continue to define TOMM failure based on original, rather than alternative, cutoffs (Clark et al., 2016; Proto et al., 2014; Verfaellie, Lafleche, Spiro, & Bousquet, 2014). This might be due in part to the emphasis on prioritizing specificity and positive predictive power over sensitivity and negative predictive power when it comes to the use of PVTs (Greve & Bianchini, 2004, 2007). Given the discrepancy between research supporting alternative PVT cutoffs following MTBI, and the continued use of original PVT cutoffs, the present study examined the influence of using original versus alternative PVT cutoffs in clinical decision-making.
It was hypothesized that: (a) participants evaluated following an MTBI who failed PVTs based on original cutoffs would be most likely to meet cognitive testing criteria for Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) mild and major neurocognitive disorder and symptom reporting criteria for International Classification of Diseases, 10th edition (ICD-10) postconcussional syndrome (PCS); (b) participants who passed PVTs would be least likely to meet criteria for these diagnoses; and (c) participants who failed PVTs based on alternative cutoffs, but not based on original cutoffs, would meet criteria for these diagnoses more frequently than participants who passed PVTs, but less frequently than participants who failed PVTs based on original cutoffs.

Methods

Participants

Participants were 164 U.S. military service members who sustained an MTBI and were evaluated at the Walter Reed Army Medical Center (WRAMC), Washington, DC. Participants were selected from a larger sample of 313 military service members diagnosed with MTBI who were evaluated at WRAMC between November 2003 and January 2011 following a suspected or confirmed TBI, and who had agreed to the use of their clinical data for research purposes. The protocols under which these data were collected were approved by the Institutional Review Board of WRAMC. This study was completed in accordance with the guidelines of the Declaration of Helsinki.

All patients had been referred to the neuropsychology TBI service at WRAMC for consultation as a result of self-reported or clinician-reported concern about their cognitive functioning. Patients were included in the original sample if they: (a) had sustained an MTBI, (b) were administered the RBANS, and (c) had not been administered the RBANS previously by the neuropsychology TBI service. Participants were excluded from the final sample if they were more than 2 years post-injury (n = 22), were missing time since injury data (n = 3), did not complete or had an incomplete Neurobehavioral Symptom Inventory (NSI; Cicerone & Kalmar, 1995; n = 23), had incomplete RBANS data (n = 6), were not administered the TOMM (n = 53), or scored 45–48 on TOMM Trial 1 and were not administered TOMM Trial 2 (n = 46). Participants were administered the RBANS, TOMM, and the NSI for the purposes of providing a brief screen of cognitive functioning and post-concussive symptoms. For the participants who scored 49 or 50 on TOMM Trial 1 (n = 79), TOMM Trial 2 was not administered, under the assumption that participants would score at least as well on Trial 2 as they did on Trial 1. Of 453 patients who scored 49–50 on TOMM Trial 1, only 2 (0.4%) scored 48 and none scored below 48 on Trial 2 (Mossman, Wygant, Gervais, & Hart, 2017). No participants in the current study were administered the TOMM retention trial.

Mild TBI was defined as a period of self-reported loss of consciousness (LOC) ≤15 min, posttraumatic amnesia ≤24 h, and/or altered mental status ≤24 h following a credible injury mechanism.
Although we would have preferred to use an LOC criterion of <30 min, consistent with commonly used diagnostic criteria (Carroll, Cassidy, Holm, Kraus, & Coronado, 2004; Management of Concussion/mTBI Working Group, 2009; Mild Traumatic Brain Injury Committee, American Congress of Rehabilitation Medicine, & Head Injury Interdisciplinary Special Interest Group, 1993), the available information regarding LOC was limited to categorical data that did not allow us to differentiate between duration of LOC greater than or less than 30 min (i.e., available data = LOC <15 min and LOC 16–60 min). Glasgow Coma Scale scores were not available for most participants.

Participants were divided into three mutually exclusive groups based on their performance on the TOMM and RBANS EI. Participants were included in the Original PVT-Fail group if they failed either the RBANS EI with a score >3 (Barker et al., 2010; Hook et al., 2009; Young et al., 2012) or TOMM Trial 2 with a score below the cutoff specified in the manual. They were included in the Alternative PVT-Fail group if they did not meet criteria for the Original PVT-Fail group and scored 1–3 on the RBANS EI (Armistead-Jehle & Hansen, 2011), ≤40 on TOMM Trial 1 (Denning, 2012), or 45–48 on TOMM Trial 2 (Greve et al., 2006; Jones, 2013). Participants were included in the PVT-Pass group if they scored >40 on TOMM Trial 1, >48 on TOMM Trial 2 (if administered), and 0 on the RBANS EI.

A total of 26 participants were included in the Original PVT-Fail group. Of these, 24 failed TOMM Trial 2 with a score <45 and 8 participants failed the RBANS EI with a score >3. A total of 53 participants were in the Alternative PVT-Fail group. Of these, 26 failed TOMM Trial 1 with a score ≤40, 22 failed TOMM Trial 2 with a score of 45–48, and 33 failed the RBANS EI with a score of 1–3. A total of 85 participants were in the PVT-Pass group.
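For illustration only, the grouping rules above can be expressed as a short decision function. The Python sketch below is a hypothetical re-implementation rather than the authors' code; the function and variable names are ours, TOMM Trial 2 is treated as optional because it was not administered to all participants, and the manual Trial 2 cutoff is represented as a score below 45, consistent with the failure counts reported above.

```python
from typing import Optional

# Hypothetical re-implementation of the grouping rules described above
# (illustrative names; not the authors' code). Original failure = RBANS
# Effort Index (EI) > 3 or TOMM Trial 2 below the manual cutoff (< 45).
# Alternative failure = EI 1-3, TOMM Trial 1 <= 40, or TOMM Trial 2 45-48.

def classify_pvt_group(tomm_trial1: int,
                       tomm_trial2: Optional[int],
                       rbans_ei: int) -> str:
    """Return 'Original PVT-Fail', 'Alternative PVT-Fail', or 'PVT-Pass'."""
    # Original (manual-based) criteria.
    if rbans_ei > 3 or (tomm_trial2 is not None and tomm_trial2 < 45):
        return "Original PVT-Fail"
    # Alternative (research-based) criteria, applied only when the original
    # criteria are not met.
    if (1 <= rbans_ei <= 3 or tomm_trial1 <= 40
            or (tomm_trial2 is not None and 45 <= tomm_trial2 <= 48)):
        return "Alternative PVT-Fail"
    # Otherwise: Trial 1 > 40, Trial 2 > 48 (if administered), and EI = 0.
    return "PVT-Pass"

# Example: Trial 1 = 39 with EI = 0 fails only the alternative criteria.
print(classify_pvt_group(39, None, 0))  # -> Alternative PVT-Fail
```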
Measures and Procedure

Neurocognitive Assessment and DSM-5 Category A Criteria for Mild and Major Neurocognitive Disorder

The RBANS (Randolph et al., 1998) is a battery of 12 subtests assessing five cognitive domains: Immediate Memory/Learning, Visuospatial/Constructional, Language, Attention, and Delayed Memory. The RBANS was originally developed to screen for dementia, but has also been validated for use in patients following TBI (Lippa, Hawes, Jokic, & Caroselli, 2013; McKay, Casey, Wertheimer, & Fichtenberg, 2007; McKay, Wertheimer, Fichtenberg, & Casey, 2008; Pachet, 2007). The DSM-5 Category A criteria for mild neurocognitive disorder indicate that a patient demonstrates performance ≥1 SD below expectations in at least one cognitive domain (American Psychiatric Association, 2013). The DSM-5 Category A criteria for major neurocognitive disorder indicate that the patient demonstrates performance ≥2 SD below expectations in at least one cognitive domain. Importantly, a clinical diagnosis of mild or major neurocognitive disorder requires clinical judgment and, in the case of major neurocognitive disorder, evidence of reduced independence in daily activities. The present study defined mild neurocognitive disorder as present when at least two scores in a single cognitive domain were ≥1 SD below the mean. The cognitive domains of Immediate Memory, Visuospatial Functioning, Language, and Attention each consisted of only two subtests; therefore, performance must have been ≥1 SD below the mean on both subtests in at least one domain for the participant to meet DSM-5 Category A criteria for mild neurocognitive disorder. For the Delayed Memory domain, participants must have been ≥1 SD below the mean on at least two of the following subtests: List Recall, Story Recall, or Figure Recall. Major neurocognitive disorder was defined as present when at least two performances in a single cognitive domain were ≥2 SD below the mean of the normative sample. We required two scores below the respective cutoffs because obtaining a single low score is fairly common in healthy adults (see Brooks, Iverson, and Holdnack (2013) for a detailed discussion).

Post-Concussive Symptoms and Postconcussional Syndrome

The NSI (Cicerone & Kalmar, 1995) is a 22-item questionnaire assessing post-concussive symptoms. Participants are asked to rate symptoms on a 5-point Likert scale ranging from 0 (none) to 4 (very severe). Each of the NSI symptoms was analyzed individually, and a total score consisting of all 22 items was also computed.

Participants were classified into groups reflecting whether they endorsed symptoms reflective of PCS based on ICD-10 criteria. The ICD-10 criteria for PCS require a person to have a history of "head trauma with a loss of consciousness" preceding the onset of symptoms by a period of up to 4 weeks and to have symptoms in at least three of six categories: physical symptoms, emotional symptoms, cognitive symptoms, insomnia, reduced tolerance to alcohol, and preoccupation with the aforementioned symptoms and fear of permanent brain damage (World Health Organization, 1992). For the present study, we did not require prior injuries to have a witnessed loss of consciousness. Additionally, because the NSI contains symptoms covering the physical, emotional, cognitive, and insomnia domains, but not the alcohol tolerance and preoccupation with symptoms/brain damage domains, only the first four domains were evaluated. It is important to acknowledge that symptoms endorsed on the NSI should be evaluated clinically. Clinical diagnosis of PCS is multifaceted and requires a comprehensive evaluation (see Iverson and colleagues (2013) for a comprehensive discussion). For the purposes of this study, however, participants were classified as meeting symptom criteria for PCS if they endorsed at least one symptom in at least three of the four physical, emotional, cognitive, and insomnia domains as moderate (i.e., a score of 2 on a scale of 0–4) or greater on the NSI. Otherwise, participants' responses on the NSI were classified as "within normal limits."
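To make the operational definitions above concrete, the following sketch encodes the two-low-scores rule used for the DSM-5 Category A criteria and the three-of-four-domain rule used for the ICD-10 PCS symptom classification. This is a hypothetical Python illustration, not the study's scoring code; the RBANS domain-to-subtest structure follows Table 1, whereas the NSI item-to-domain assignment is passed in as a parameter because the specific assignment is not reproduced in the text.

```python
# Hypothetical operationalization of the study's scoring rules (not the
# authors' code). Assumes RBANS subtest z scores and NSI item ratings (0-4)
# keyed by name; all identifiers are ours.

RBANS_DOMAINS = {
    "Immediate Memory": ["List Learning", "Story Memory"],
    "Visuospatial/Constructional": ["Figure Copy", "Line Orientation"],
    "Language": ["Picture Naming", "Semantic Fluency"],
    "Attention": ["Digit Span", "Coding"],
    "Delayed Memory": ["List Recall", "Story Recall", "Figure Recall"],
}

def meets_category_a(z_scores: dict, sd_cut: float) -> bool:
    """True if >= 2 subtest scores in any one domain are at least `sd_cut`
    SDs below the mean (sd_cut = 1.0 for mild, 2.0 for major), as the
    criteria were operationalized in the present study."""
    for subtests in RBANS_DOMAINS.values():
        low = sum(1 for name in subtests if z_scores[name] <= -sd_cut)
        if low >= 2:
            return True
    return False

def meets_pcs_symptoms(nsi_ratings: dict, domain_items: dict) -> bool:
    """True if at least one item is rated moderate or greater (>= 2) in at
    least three of the four NSI domains (physical, emotional, cognitive,
    insomnia). `domain_items` maps each domain to its NSI item keys; the
    study's exact item-to-domain assignment is not reproduced here."""
    domains_endorsed = sum(
        1 for items in domain_items.values()
        if any(nsi_ratings[item] >= 2 for item in items)
    )
    return domains_endorsed >= 3
```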
Statistical Analyses

Chi-square analyses and ANOVAs were used to compare the three participant groups in terms of demographic variables, self-reported post-concussive symptoms, cognitive test scores, and the base rates of meeting DSM-5 Category A criteria for mild and major neurocognitive disorder and ICD-10 criteria for PCS. SPSS (version 20) was used for all analyses. All analyses were conducted with a p value <.01 to minimize Type I error.
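As a rough illustration of the analytic approach, the SciPy-based sketch below shows the kinds of computations involved: a two-sided Mann–Whitney U test, a 2 × 2 chi-square test with Yates continuity correction, and Cohen's d based on a pooled standard deviation. It uses placeholder data rather than the study data and is not the SPSS syntax that was actually used; the 2 × 2 counts are reconstructed from the reported percentages purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder vectors standing in for participant-level RBANS index scores in
# two PVT groups; the real data are not reproduced here.
pvt_pass_scores = rng.normal(93, 10, size=85)
pvt_fail_scores = rng.normal(76, 12, size=53)

def cohens_d(x, y):
    """Cohen's d using a pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

# Pair-wise Mann-Whitney U test (two-sided), evaluated against p < .01.
u_stat, p_mwu = stats.mannwhitneyu(pvt_pass_scores, pvt_fail_scores,
                                   alternative="two-sided")

# 2 x 2 chi-square with Yates continuity correction for a base-rate
# comparison; the counts are reconstructed from the reported percentages
# (22.4% of 85 PVT-Pass and 64.2% of 53 Alternative PVT-Fail participants
# met the mild criterion) and are illustrative only.
observed = np.array([[19, 66],
                     [34, 19]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(observed, correction=True)

print(round(cohens_d(pvt_pass_scores, pvt_fail_scores), 2),
      round(p_mwu, 4), round(chi2, 2), round(p_chi2, 4))
```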
Results

Demographics and Injury Characteristics

The majority of participants (95.1%) were men. Participants were an average of 28.1 (SD = 7.3) years old, with an average of 13.0 (SD = 1.9) years of education. Participants identified themselves as Caucasian (76.2%), Hispanic (13.4%), African American (4.9%), or another race (3.0%); 2.4% did not report a race/ethnicity. Participants were assessed an average of 4.1 months (SD = 5.0; range = 0–24 months) following injury. There were no significant main effects for age, education, gender, or ethnicity across the three PVT groups. Time since injury did differ significantly across the groups, F(2, 161) = 3.306, p = .039. Fisher's LSD post hoc comparisons indicated that the Original PVT-Fail group (M = 6.42, SD = 4.99) was significantly further from injury than the Alternative PVT-Fail group (M = 3.70, SD = 4.38, p = .022) and the PVT-Pass group (M = 3.73, SD = 4.86, p = .016). Time since injury did not differ significantly between the PVT-Pass and Alternative PVT-Fail groups (p = .971).

Neurocognitive Performance and Classification of Mild and Major Neurocognitive Disorder

Descriptive statistics, group comparisons, and effect sizes (Cohen, 1988) for the RBANS subtests and index scores by PVT group are presented in Table 1. Data for the majority of the RBANS indices and individual subtests were not normally distributed; therefore, pair-wise comparisons using Mann–Whitney U tests were used. They indicated that both the Original PVT-Fail and Alternative PVT-Fail groups performed significantly worse on all RBANS indices compared to the PVT-Pass group. Participants in the Original PVT-Fail group performed worse than the Alternative PVT-Fail group on the Total Scale, Immediate Memory, and Delayed Memory Indices. Compared to the PVT-Pass group, the Original PVT-Fail group performed significantly worse on 11 of 12 individual subtests, with Figure Copy being the only exception. Participants in the Alternative PVT-Fail group performed significantly worse than the PVT-Pass group on 8 of 12 individual RBANS subtests (i.e., List Learning, Line Orientation, Semantic Fluency, Digit Span, Coding, List Recall, List Recognition, and Figure Recall).

Table 1. Descriptive statistics of RBANS index standard scores and subtest z scores by group

Measure | 1. PVT-Pass (n = 85), M (SD) | 2. Alternative PVT-Fail (n = 53), M (SD) | 3. Original PVT-Fail (n = 26), M (SD) | d 1–3 | d 1–2 | d 2–3 | MWU differences at p < .01
Immediate memory index | 100.7 (14.1) | 90.1 (14.5) | 73.7 (12.4) | 1.97 | 0.74 | 1.19 | 1 > 2 > 3
List learning | 0.2 (0.8) | −0.4 (1.0) | −1.8 (1.0) | 2.22 | 0.69 | 1.31 | 1 > 2 > 3
Story memory | 0.1 (1.1) | −0.4 (1.1) | −1.6 (1.3) | 1.51 | 0.44 | 1.01 | 1 & 2 > 3
Visuospatial/constructional index | 93.7 (15.1) | 83.4 (14.3) | 75.9 (15.0) | 1.17 | 0.69 | 0.51 | 1 > 2 & 3
Figure copy | −1.2 (1.8) | −1.8 (1.9) | −2.5 (2.8) | 0.64 | 0.35 | 0.31 | –
Line orientation | 0.5 (0.7) | 0.2 (0.7) | −0.9 (1.6) | 1.59 | 0.43 | 1.12 | 1 > 2 > 3
Language index | 92.0 (12.3) | 81.3 (15.4) | 77.1 (15.4) | 1.15 | 0.79 | 0.27 | 1 > 2 & 3
Picture naming | 0.1 (0.8) | −0.5 (1.6) | −0.8 (1.4) | 0.95 | 0.53 | 0.21 | 1 > 3
Semantic fluency | −0.6 (1.2) | −1.4 (1.2) | −1.9 (1.2) | 1.03 | 0.63 | 0.40 | 1 > 2 & 3
Attention index | 91.5 (14.2) | 73.9 (17.5) | 65.3 (19.4) | 1.69 | 1.14 | 0.47 | 1 > 2 & 3
Digit span | −0.1 (1.0) | −1.0 (1.1) | −1.3 (1.2) | 1.15 | 0.88 | 0.25 | 1 > 2 & 3
Coding | −0.7 (1.0) | −1.5 (1.2) | −2.6 (1.7) | 1.61 | 0.74 | 0.78 | 1 > 2 > 3
Delayed memory index | 97.8 (7.6) | 79.5 (17.8) | 55.4 (12.7) | 4.81 | 1.59 | 1.50 | 1 > 2 > 3
List recall | 0.3 (0.9) | −0.7 (1.1) | −1.9 (1.1) | 2.30 | 0.97 | 1.07 | 1 > 2 > 3
List recognition | 0.0 (0.7) | −1.8 (2.2) | −5.8 (3.4) | 4.31 | 1.38 | 1.52 | 1 > 2 > 3
Story recall | −0.1 (1.0) | −0.6 (1.1) | −1.8 (1.2) | 1.69 | 0.50 | 1.05 | 1 & 2 > 3
Figure recall | −0.1 (0.8) | −1.1 (1.3) | −2.0 (1.0) | 2.12 | 0.94 | 0.77 | 1 > 2 > 3
Total scale index | 92.9 (10.1) | 76.4 (12.3) | 62.8 (10.6) | 2.94 | 1.51 | 1.16 | 1 > 2 > 3
Note: PVT = performance validity test; MWU = Mann–Whitney U test.

The percentage of participants that met DSM-5 Category A criteria for mild and major neurocognitive disorder, as well as the percentage of participants scoring one and two standard deviations below the mean of the normative sample on each of the RBANS indices, are presented in Table 2. Chi-square tests with Yates continuity correction indicated that participants in the PVT-Pass group were significantly less likely than participants in the Alternative PVT-Fail group to meet cognitive testing criteria for mild neurocognitive disorder (χ²(1, n = 138) = 22.38, p < .001) or major neurocognitive disorder (χ²(1, n = 138) = 15.20, p < .001). Participants in the PVT-Pass group were also significantly less likely than participants in the Original PVT-Fail group to meet cognitive testing criteria for mild neurocognitive disorder (χ²(1, n = 111) = 30.52, p < .001) or major neurocognitive disorder (χ²(1, n = 111) = 47.07, p < .001). The majority (84.6%) of the participants in the Original PVT-Fail group, 64.2% of participants in the Alternative PVT-Fail group, and 22.4% of participants in the PVT-Pass group met DSM-5 Category A criteria for mild neurocognitive disorder. Similarly, 57.7% of the Original PVT-Fail group and 22.6% of the Alternative PVT-Fail group, but only 1.2% of participants in the PVT-Pass group, met DSM-5 Category A criteria for major neurocognitive disorder.

Additional analyses were conducted to evaluate how the proportion of participants meeting Category A criteria for mild and major neurocognitive disorder changed when alternative cutoffs were used instead of original cutoffs. As seen in Fig. 1, when alternative cutoffs were used instead of original cutoffs, 34 fewer participants met DSM-5 Category A criteria for mild neurocognitive disorder, changing the percentage of participants meeting cognitive testing criteria for mild neurocognitive disorder from 32.3% to 11.6% and increasing the percentage of participants who failed PVTs from 15.9% to 48.2%.

Fig. 1. Number and percentage of participants failing PVTs, meeting DSM-5 Category A criteria for mild neurocognitive disorder, or not meeting DSM-5 Category A criteria for mild neurocognitive disorder when different PVT cutoffs are used. [Figure not reproduced: grouped bar chart contrasting traditional (original) and alternative cutoffs; categories are Fail PVTs, No Mild NCD, Mild NCD, No Major NCD, and Major NCD; y-axis is the number of participants.]
Table 2. Rates of meeting DSM-5 Category A criteria for mild and major neurocognitive disorder and PCS, and rates of consistent performance ≥1 SD or ≥2 SD below the standardization sample mean on each cognitive domain

Criterion | 1. PVT-Pass (%) | 2. Alternative PVT-Fail (%) | 3. Original PVT-Fail (%) | Chi-square (df = 2) | Pair-wise comparisons at p < .01
Base rates of meeting cognitive criteria for neurocognitive disorder
Met cognitive criteria for mild neurocognitive disorder | 22.4 | 64.2 | 84.6 | 41.81 | 1 < 2 & 3
Met cognitive criteria for major neurocognitive disorder | 1.2 | 22.6 | 57.7 | 46.63 | 1 < 2 < 3
Base rates of scores ≥1 SD below the mean
Immediate memory (2/2 scores) | 5.9 | 18.9 | 65.4 | 44.91 | 1 & 2 < 3
Visuospatial (2/2 scores) | 3.5 | 1.9 | 19.2 | 11.42 | —
Language (2/2 scores) | 2.4 | 9.4 | 19.2 | 8.88 | 1 < 3
Attention (2/2 scores) | 10.6 | 39.6 | 57.7 | 28.03 | 1 < 2 & 3
Delayed memory (2/3 scores) | 5.9 | 41.5 | 76.9 | 55.47 | 1 < 2 < 3
Base rates of scores ≥2 SD below the mean
Immediate memory (2/2 scores) | 0.0 | 3.8 | 30.8 | 33.66 | 1 & 2 < 3
Visuospatial (2/2 scores) | 0.0 | 1.9 | 15.4 | 16.3 | 1 < 3
Language (2/2 scores) | 0.0 | 3.8 | 15.4 | 13.37 | 1 < 3
Attention (2/2 scores) | 0.0 | 7.5 | 26.9 | 23.15 | 1 < 3
Delayed memory (2/3 scores) | 1.2 | 9.4 | 50.0 | 46.69 | 1 & 2 < 3
Note: PVT = performance validity test; PCS = postconcussional syndrome.

As shown in Table 2, the Original PVT-Fail group was more likely than the PVT-Pass group to perform ≥2 SD below the standardization sample mean in each of the five cognitive domains. The Original PVT-Fail group was also more likely than the PVT-Pass group to perform ≥1 SD below the standardization sample mean in all cognitive domains except Visuospatial Functioning. The Alternative PVT-Fail group was more likely than the PVT-Pass group to perform ≥1 SD below the standardization mean in the domains of Attention and Delayed Memory. Overall, the memory indices were most strongly related to PVT performance. No participants in the PVT-Pass group performed ≥2 SD below the mean of the normative sample on the Immediate Memory Index, while 3.8% of participants in the Alternative PVT-Fail group and 30.8% of participants in the Original PVT-Fail group scored ≥2 SD below the mean of the normative sample. On the Delayed Memory Index, 5.9% of participants in the PVT-Pass group, 41.5% of participants in the Alternative PVT-Fail group, and 76.9% of participants in the Original PVT-Fail group scored ≥1 SD below the standardization sample mean. Similarly, 1.2% of participants in the PVT-Pass group, 9.4% of participants in the Alternative PVT-Fail group, and 50.0% of participants in the Original PVT-Fail group scored ≥2 SD below the standardization sample mean on the Delayed Memory Index.

The number of cognitive domains with consistently low scores is displayed in Figs. 2 and 3. Only 1.2% of participants in the PVT-Pass group scored ≥1 SD below the standardization sample mean in three or more cognitive domains, whereas 11.3% of participants in the Alternative PVT-Fail group and 46.1% of participants in the Original PVT-Fail group had scores that were consistently ≥1 SD below the mean in three or more cognitive domains.
Only 1.2% of participants in the PVT-Pass group scored ≥2 SD below the mean in one or more cognitive domains, compared to 22.7% of participants in the Alternative PVT-Fail group and 57.6% of participants in the Original PVT-Fail group.

Fig. 2. Percentage of participants with 0–5 cognitive domain scores ≥1 SD below the mean of the normative sample. [Figure not reproduced: grouped bar chart by PVT group; x-axis is the number of neurocognitive domains consistently ≥1 SD below the mean (0–5); y-axis is the percentage of participants.]

Fig. 3. Percentage of participants with 0–5 cognitive domain scores ≥2 SD below the mean of the normative sample. [Figure not reproduced: grouped bar chart by PVT group; x-axis is the number of neurocognitive domains consistently ≥2 SD below the mean (0–5); y-axis is the percentage of participants.]

Post-concussion Symptoms

Descriptive statistics, group comparisons, and Cohen's effect sizes (Cohen, 1988) for the 22 individual NSI symptoms and the total score, by PVT group, are presented in Table 3. Participants in both PVT-Fail groups had higher NSI total scores compared to those in the PVT-Pass group. Participants in the Original PVT-Fail group reported significantly greater symptom severity than participants in the PVT-Pass group for 20 of the 22 individual items, with the largest effect sizes found for symptoms relating to headaches, sensitivity to light, and depression (all ds ≥ 1.21).

Table 3. Descriptive statistics for the individual Neurobehavioral Symptom Inventory symptoms by group

Item | 1. PVT-Pass (n = 85), M (SD) | 2. Alternative PVT-Fail (n = 53), M (SD) | 3. Original PVT-Fail (n = 26), M (SD) | d 1–3 | d 1–2 | d 2–3 | MWU differences at p < .01
1. Feeling dizzy | 0.8 (0.9) | 1.2 (1.0) | 1.5 (0.9) | 0.81 | 0.45 | 0.31 | 1 < 3
2. Loss of balance | 0.8 (0.9) | 1.1 (0.9) | 1.5 (1.0) | 0.72 | 0.33 | 0.38 | 1 < 3
3. Poor coordination, clumsy | 0.7 (0.8) | 1.1 (0.9) | 1.4 (1.0) | 0.85 | 0.44 | 0.39 | 1 < 3
4. Headaches | 1.4 (1.2) | 2.1 (1.2) | 2.7 (0.9) | 1.21 | 0.58 | 0.56 | 1 < 2 & 3
5. Nausea | 0.5 (0.8) | 0.7 (1.0) | 1.3 (1.1) | 0.84 | 0.22 | 0.55 | 1 < 3
6. Vision problems | 1.0 (1.1) | 0.9 (0.9) | 1.7 (1.3) | 0.60 | 0.03 | 0.71 | —
7. Sensitivity to light | 0.9 (0.9) | 1.3 (1.1) | 2.2 (1.0) | 1.34 | 0.40 | 0.80 | 1 & 2 < 3
8. Hearing difficulty | 1.0 (1.0) | 1.1 (1.0) | 1.6 (1.1) | 0.60 | 0.12 | 0.48 | 1 < 3
9. Sensitivity to noise | 1.0 (1.1) | 1.3 (1.2) | 1.7 (1.2) | 0.66 | 0.26 | 0.35 | 1 < 3
10. Numbness or tingling | 1.0 (1.0) | 1.1 (1.0) | 1.1 (1.0) | 0.08 | 0.06 | 0.02 | —
11. Change in taste and/or smell | 0.3 (0.7) | 0.5 (0.7) | 1.0 (1.3) | 0.77 | 0.20 | 0.53 | 1 < 3
12. Loss/increased appetite | 0.6 (1.0) | 1.2 (1.0) | 1.8 (1.4) | 1.10 | 0.56 | 0.54 | 1 < 2 & 3
13. Poor concentration | 1.4 (1.1) | 2.0 (1.0) | 2.3 (1.2) | 0.82 | 0.51 | 0.36 | 1 < 2 & 3
14. Forgetfulness/memory problems | 1.6 (1.2) | 2.2 (1.0) | 2.8 (0.9) | 1.11 | 0.59 | 0.59 | 1 < 2 & 3
15. Difficulty making decisions | 0.8 (0.9) | 1.3 (1.1) | 1.8 (1.3) | 0.99 | 0.51 | 0.45 | 1 < 2 & 3
16. Slowed thinking/getting organized | 1.0 (1.0) | 1.7 (1.1) | 1.8 (1.1) | 0.76 | 0.65 | 0.10 | 1 < 2 & 3
17. Fatigue/loss of energy/tires easily | 1.0 (1.0) | 1.7 (1.3) | 2.0 (1.1) | 1.01 | 0.64 | 0.28 | 1 < 2 & 3
18. Difficulty falling or staying asleep | 1.9 (1.3) | 2.2 (1.3) | 2.9 (1.3) | 0.75 | 0.22 | 0.53 | 1 < 3
19. Feeling anxious or tense | 1.14 (1.09) | 1.60 (1.26) | 2.19 (1.50) | 0.89 | 0.40 | 0.44 | 1 < 3
20. Feeling depressed or sad | 0.61 (0.85) | 1.08 (1.16) | 1.81 (1.39) | 1.23 | 0.48 | 0.59 | 1 < 3
21. Irritability, easily annoyed | 1.35 (1.14) | 1.79 (1.18) | 2.65 (1.20) | 1.13 | 0.38 | 0.73 | 1 & 2 < 3
22. Poor frustration tolerance/overwhelmed | 1.00 (1.06) | 1.53 (1.12) | 2.19 (1.50) | 1.03 | 0.49 | 0.53 | 1 < 2 & 3
Total Score | 21.79 (14.83) | 30.60 (15.86) | 42.17 (16.81) | 1.33 | 0.58 | 0.71 | 1 < 2 & 3
Note: PVT = performance validity test; MWU = Mann–Whitney U test.

Symptom endorsement rates were further compared with ICD-10 symptom reporting criteria for PCS. Chi-square analyses revealed significant differences between the groups: 65.4% of participants in the Original PVT-Fail group, 43.4% of participants in the Alternative PVT-Fail group, and 21.7% of participants in the PVT-Pass group met ICD-10 PCS criteria based on symptoms endorsed as moderate or greater (χ²(2, n = 162) = 18.42, p < .001). Participants in the PVT-Pass group were significantly less likely than participants in the Alternative PVT-Fail and Original PVT-Fail groups to meet ICD-10 symptom reporting criteria for PCS. When using original PVT criteria, 41 of the 136 participants who passed PVTs met symptom reporting criteria for PCS; however, when alternative PVT criteria were used, only 18 participants who passed PVTs met symptom reporting criteria for PCS.

Discussion

The current study illustrated a statistically significant and clinically meaningful relationship between PVT performance and the overall outcome from a neuropsychological evaluation. Participants who failed PVTs were more likely to meet ICD-10 symptom reporting criteria for PCS and DSM-5 Category A criteria for mild and major neurocognitive disorder. Participants who failed PVTs performed worse on most cognitive measures and reported worse PCS symptoms than participants who passed PVTs. The current study also found a PVT failure rate of 15.9% when using original cutoff scores; however, when using alternative cutoffs that have been validated in MTBI samples, the failure rate more than tripled to 48.2%. Additionally, when using alternative PVT cutoffs rather than original PVT cutoffs, the number of patients who passed PVTs and met DSM-5 Category A criteria for mild neurocognitive disorder dropped from 53 (32.3%) to 19 (11.6%), and the number of participants who passed PVTs and met cognitive testing criteria for major neurocognitive disorder dropped from 13 (7.9%) to 1 (0.6%). Use of alternative cutoffs also decreased the number of participants who passed PVTs and met criteria for PCS from 41 (24.4%) to 18 (10.7%).

Participants in both PVT failure groups performed significantly worse on cognitive measures and reported more severe post-concussion symptoms than participants who passed PVTs based on alternative cutoffs. Additionally, participants who failed PVTs based on original cutoffs also performed more poorly on many cognitive measures than participants who failed PVTs based on alternative cutoffs but not based on original cutoffs. These findings illustrate a gradient in the relationship between effort testing and clinical outcomes and support past research suggesting that performance validity lies on a continuum (Armistead-Jehle, Cooper, & Vanderploeg, 2016; Lippa, Agbayani, Hawes, Jokic, & Caroselli, 2014; Rogers, 2008; Walters, Berry, Rogers, Payne, & Granacher, 2009; Walters et al., 2008).
Notably, Armistead-Jehle et al. (2016) previously demonstrated that in active duty service members with a history of possible mild TBI who passed two formal PVTs, performance validity test scores explained the greatest amount of variance (6.6%) in neurocognitive performance. Patients who performed perfectly on both PVTs performed significantly better (medium to large effect sizes) on the RBANS Immediate Memory, Visuospatial, Language, Delayed Memory, and Total Score Indices. These findings are similar to the current study's finding of significantly better performance on all RBANS Indices (moderate to very large effect sizes) in patients who passed PVTs compared to patients who failed alternative PVT cutoffs. The most striking differences in cognitive performance between the groups were noted on the memory indices. Only 1.2% of participants who passed alternative PVT cutoffs scored ≥2 SD below the mean of the normative sample in the Delayed Memory domain, while 9.4% of participants who failed PVTs based on alternative cutoffs and 50.0% of participants who failed PVTs based on original cutoffs scored at least 2 SD below the mean of the normative sample. No participants who passed PVTs based on alternative cutoffs scored ≥2 SD below the mean of the normative sample on the Immediate Memory Index, while 3.8% of participants who failed PVTs based on alternative cutoffs and 30.8% of participants who failed PVTs based on original cutoffs scored ≥2 SD below the mean.

A rather large number (22.4%) of participants who passed PVTs met the DSM-5 Category A criteria for mild neurocognitive disorder. Such a high rate of diagnosis in a healthy sample suggests that DSM-5 Category A criteria for mild neurocognitive disorder may be rather non-specific. The high rate of satisfying the DSM-5 Category A criteria for mild neurocognitive disorder in the present study could also be the result of several factors. First, this study was conducted with active duty service members, many of whom likely had comorbidities including depression, anxiety, PTSD, and chronic pain, all of which have been shown to negatively affect cognitive performance (Drag, Spencer, Walker, Pangilinan, & Bieliauskas, 2012; Nicholson, Martelli, & Zasler, 2001; Vasterling et al., 2012; Verfaellie, Lafleche, Spiro, & Bousquet, 2014). Second, 30 participants were tested within a month of their injury, when cognitive impairments are more likely. Third, financial incentives have been shown to be strongly related to neuropsychological performance (Binder & Rohling, 1996), and many of these evaluations were conducted in the context of a disability evaluation. Finally, the participants were all aware that they were undergoing an evaluation related to their TBI. Diagnosis threat could have affected these participants' performance, as this has been shown to negatively affect cognitive performance in MTBI patients (Blaine, Sullivan, & Edmed, 2013; Suhr & Gunstad, 2005). Additionally, it is acknowledged that this study defined mild and major neurocognitive disorder only by cognitive test performance. If the clinical interview and assessment of activities of daily living had been considered, it is possible these rates would be lower.
The main limitation of the current study is that the RBANS EI, which was used to define both of the PVT failure groups, overlaps with the RBANS Digit Span and List Recognition subtests, as well as the Attention, Delayed Memory, and Total Scale Indices. We ultimately decided to keep these measures in our analyses, rather than removing or altering them, because they are all used clinically. It is important to note, however, that the differences between the groups on these measures may be artificially inflated. Additionally, because some participants were tested within a month of their injury, it is possible that they scored below the cutoff due to genuine cognitive sequelae and were therefore incorrectly identified as failing PVTs; the alternative cutoffs for the TOMM and RBANS EI used in this study have not been validated in acute mild TBI samples. Importantly, however, participants in the Original PVT-Fail group were significantly further from their injury than participants in the PVT-Pass group. As with most studies employing PVTs, it is possible that certain participants were misclassified, because PVT criteria are defined through a rather tautological method (Bigler, 2012). This is largely due to the fact that patients putting forth invalid test performances will not volunteer for validity studies, and therefore there is no way to be certain that the criterion is accurate (Williams, 2011). Regardless of which cutoffs were used, there was a clear relationship between PVT failure and meeting the cognitive testing criteria for neurocognitive disorder and PCS diagnoses, suggesting that, even if the group classifications were not 100% accurate, the overall findings are valid.

In conclusion, the current study illustrated that active duty service members with a history of MTBI who perform poorly on PVT measures are much more likely to fulfill the DSM-5 Category A criteria for neurocognitive disorder or ICD-10 symptom criteria for postconcussional syndrome. Presumably, many of these diagnoses would be inaccurate due to inadequate effort on testing and symptom exaggeration. Similar to previous findings supporting the use of updated cutoffs for the TOMM and RBANS EI in patients with mild TBI, the present findings demonstrate that these alternative cutoffs identify many more cases of possible invalid responding than the original cutoffs. Future research should continue to investigate the alternative PVT cutoffs examined here with mild TBI patients in non-military and/or non-disability settings. Additionally, continued research should investigate additional alternative cutoffs for use with mild TBI patients. By using only original cutoffs, clinicians and researchers may be missing some people with invalid performances. As always, PVT failure should be viewed in a clinical context. That is, failure of PVTs is indicative of invalid test data, but attribution of etiology may be more difficult. Nonetheless, by taking test results at face value in someone who fails PVTs, incorrect assumptions are likely to be made. Clinicians should consider adopting well-validated alternative PVT cutoffs for use with MTBI patients.

Conflict of Interest

None declared.

Declaration and Disclosures

The authors alone are responsible for the content and writing of the paper.
Grant Iverson, Ph.D., has been reimbursed by the government, professional scientific bodies, and commercial organizations for discussing or presenting research relating to MTBI and sport-related concussion at meetings, scientific conferences, and symposiums. He has a clinical practice in forensic neuropsychology involving individuals who have sustained MTBIs. He has received honorariums for serving on research panels that provide scientific peer review of programs. The other authors report no competing or conflicts of interest. References Armistead-Jehle, P. (2010). Symptom validity test performance in U.S. veterans referred for evaluation of mild TBI. Applied Neuropsychology, 17,52–59. doi:10.1080/09084280903526182. Armistead-Jehle, P., & Buican, B. (2012). Evaluation context and Symptom Validity Test performances in a U.S. Military sample. Archives of Clinical Neuropsychology, 27, 828–839. doi:10.1093/arclin/acs086. Armistead-Jehle, P., Cooper, D. B., & Vanderploeg, R. D. (2016). The role of performance validity tests in the assessment of cogntivie functioning after mili- tary concussion: A replication and extension. Applied Neuropsychology: Adult, 23, 264–273. doi:10.1080/23279095.2015.1055564. Armistead-Jehle, P., Gervais, R. O., & Green, P. (2012). Memory complaints inventory and symptom validity test performance in a clinical sample. Archives of Clinical Neuropsychology, 27, 725–734. doi:10.1093/arclin/acs071. Armistead-Jehle, P., & Hansen, C. L. (2011). Comparison of the repeatable battery for the assessment of neuropsychological status effort index and stand- alone symptom validity tests in a military sample. Archives of Clinical Neuropsychology, 26, 592–601. doi:10.1093/arclin/acr049. Barker, M. D., Horner, M. D., & Bachman, D. L. (2010). Embedded indices of effort in the repeatable battery for the assessment of neuropsychological status (RBANS) in a geriatric sample. The Clinical Neuropsychologist, 24, 1064–1077. doi:10.1080/13854046.2010.486009. Bauer, L., O’Bryant, S. E., Lynch, J. K., McCaffrey, R. J., & Fisher, J. M. (2007). Examining the test of memory malingering trial 1 and word memory test immediate recognition as screening tools for insufficient effort. Assessment, 14, 215–222. doi:10.1177/1073191106297617. Downloaded from https://academic.oup.com/acn/article/33/5/606/4560639 by DeepDyve user on 13 July 2022 616 S.M. Lippa et al. / Archives of Clinical Neuropsychology 33 (2018); 606–618 Bigler, E. D. (2012). Symptom validity testing, effort, and neuropsychological assessment. Journal of the International Neuropsychological Society, 18, 632–640. Binder, L. M., & Rohling, M. L. (1996). Money matters: A meta-analytic review of the effects of financial incentives on recovery after closed-head injury. American Journal of Psychiatry, 153,7–10. doi:10.1097/00001199-199608000-00012. Blaine, H., Sullivan, K. A., & Edmed, S. L. (2013). The effect of varied test instructions on neuropsychological performance following mild traumatic brain injury: An investigation of “diagnosis threat”. Journal of Neurotrauma, 30, 1405–1414. doi:10.1089/neu.2013.2865. Boyle, E., Cancelliere, C., Hartvigsen, J., Carroll, L. J., Holm, L. W., & Cassidy, J. D. (2014). Systematic review of prognosis after mild traumatic brain injury in the military: Results of the International Collaboration on Mild Traumatic Brain Injury Prognosis. Arcives of Physical Medicine and Rehabilitation, 95, S230–237. doi:10.1016/j.apmr.2013.08.297. Brooks, B. L., Iverson, G. L., & Holdnack, J. A. (2013). 
Understanding and using multivariate base rates with the WAIS-IV/WMS-IV. In Holdnack J. A., Drozdick L. W., Weiss L. G., & Iverson G. L. (Eds.), WAIS-IV/WMS-IV/ACS: Advanced clinical interpretation. San Diego, CA: Elsevier Science. Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity NAN policy & planning committee. Archives of Clinical Neuropsychology, 20, 419–426. doi:10.1016/j.acn.2005.02.002. Carroll, L. J., Cassidy, J. D., Holm, L., Kraus, J., & Coronado, V. G. (2004). Methodological issues and research recommendations for mild traumatic brain injury: The WHO collaborating centre task force on mild traumatic brain injury. Journal of Rehabilitation Medicine, 36, 113–125. doi:http://dx.doi.org/ 10.1016/j.apmr.2013.04.026. Carter, K. R., Scott, J. G., Adams, R. L., & Linck, J. (2016). Base rate comparison of suboptimal scores on the RBANS effort scale and effort index in Parkinson’s disease. The Clinical Neuropsychologist, 30, 1118–1125. doi:10.1080/13854046.2016.1206145. Cicerone, K. D., & Karlmar, K. (1995). Persistent postconcussion syndrome: The structure of subjective complaints after mild traumatic brain injuury. Journal of Head Trauma Rehabilitation, 10 (3), 1–17. doi:10.1097/00001199-199506000-00002. Clark, A. L., Sorg, S. F., Schiehser, D. M., Bigler, E. D., Bondi, M. W., Jacobson, M. W., et al. (2016). White matter associations with performance validity testing in veterans with mild traumatic brain injury: The utility of biomarkers in complicated assessment. Journal of Head Trauma Rehabil, 31, 346–359. doi:10.1097/HTR.0000000000000183. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates. Constantinou, M., Bauer, L., Ashendorf, L., Fisher, J. M., & McCaffrey, R. J. (2005). Is poor performance on recognition memory effort measures indicative of generalized poor performance on neuropsychological tests? Archives of Clinical Neuropsychology, 20, 191–198. doi:10.1016/j.acn.2004.06.002. Crighton, A. H., Wygant, D. B., Holt, K. R., & Granacher, R. P. (2015). Embedded effort scales in the repeatable battery for the assessment of neuropsycho- logical status: Do they detect neurocognitive malingering? Archives of Clinical Neuropsychology, 30, 181–185. doi:10.1093/arclin/acv002. Denning, J. H. (2012). The efficiency and accuracy of the test of memory malingering trial 1, errors on the first 10 items of the test of memory malingering, and five embedded measures in predicting invalid test performance. Archives of Clinical Neuropsychology, 27, 417–432. doi:10.1093/arclin/acs044. Drag, L. L., Spencer, R. J., Walker, S. J., Pangilinan, P. H., & Bieliauskas, L. A. (2012). The contributions of self-reported injury characteristics and psychiat- ric symptoms to cognitive functioning in OEF/OIF veterans with mild traumatic brain injury. Journal of the International Neuropsychological Society, 18, 576–584. doi:10.1017/S1355617712000203. Dunham, K. J., Shadi, S., Sofko, C. A., Denney, R. L., & Calloway, J. (2014). Comparison of the repeatable battery for the assessment of neuropsychological status effort scale and effort index in a dementia sample. Archives of Clinical Neuropsychology, 29, 633–641. doi:10.1093/arclin/acu042. Green, P. (2007). The pervasive influence of effort on neuropsychological tests. Physical Medicine and Rehabilitation Clinics of North America, 18,43–68. doi:10.1016/j.pmr.2006.11.002. 
Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M., 3rd (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Injury, 15, 1045–1060. doi:10.1080/02699050110088254. Greve, K. W., & Bianchini, K. J. (2004). Setting empirical cut-offs on psychometric indicators of negative response bias: A methodological commentary with recommendations. Archives of Clinical Neuropsychology, 19, 533–541. doi:10.1016/j.acn.2003.08.002. Greve, K. W., & Bianchini, K. J. (2007). Detection of cognitive malingering with tests of executive function. In Larrabee G. J. (Ed.), Assessment of malin- gered neuropsychological deficits (pp. 171–225). New York: Oxford University Press. Greve, K. W., Bianchini, K. J., & Doane, B. M. (2006). Classification accuracy of the test of memory malingering in traumatic brain injury: Results of a known-groups analysis. Journal of Clinical and Experimental Neuropsychology, 28, 1176–1190. doi:10.1080/13803390500263550. Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R. (2009). American academy of clinical neuropsychology consensus conference statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129. doi:10.1080/ Hoge, C. W., McGurk, D., Thomas, J. L., Cox, A. L., Engel, C. C., & Castro, C. A. (2008). Mild traumatic brain injury in U.S. Soldiers returning from Iraq. New England Journal of Medicine, 358, 453–463. doi:10.1056/NEJMoa072972. Hook, J. N., Marquine, M. J., & Hoelzle, J. B. (2009). Repeatable battery for the assessment of neuropsychological status effort index performance in a medi- cally ill geriatric sample. Archives of Clinical Neuropsychology, 24, 231–235. doi:10.1093/arclin/acp026. Iverson, G. L. (2006). Ethical issues associated with the assessment of exaggeration, poor effort, and malingering. Applied Neuropsychology, 13,77–90. doi:10.1207/s15324826an1302_3. Iverson, G. L., Gardner, A. J., Terry, D. P., Ponsford, J. L., Sills, A. K., Broshek, D. K., et al. (2017). Predictors of clinical recovery from concussion: A sys- tematic review. British Journal of Sports Medicine, 51, 941–948. doi:10.1136/bjsports-2017-097729. Iverson, G. L., Lange, R. T., Brooks, B. L., & Rennison, V. L. (2010). “Good old days” bias following mild traumatic brain injury. The Clinical Neuropsychologist, 24,17–37. doi:10.1080/13854040903190797. Iverson, G. L., Zasler, N. D., & Lange, R. T. (2006). Post-concussive disorder. In Zasler N. D., Katz D. I., & Zafonte R. D. (Eds.), Brain injury medicine: Principles and practice (pp. 373–405). New York: Demos Medical Publishing. Jones, A. (2013). Test of memory malingering: Cutoff scores for psychometrically defined malingering groups in a military sample. The Clinical Neuropsychologist, 27, 1043–1059. doi:10.1080/13854046.2013.804949. Jones, A. (2016). Repeatable battery for the assessment of neuropsychological status: Effort index cutoff scores for psychometrically defined malingering groups in a military sample. Archives of Clinical Neuropsychology, 31, 273–283. doi:10.1093/arclin/acw006. Downloaded from https://academic.oup.com/acn/article/33/5/606/4560639 by DeepDyve user on 13 July 2022 S.M. Lippa et al. / Archives of Clinical Neuropsychology 33 (2018); 606–618 617 Kirkwood, M. W., Peterson, R. L., Connery, A. K., Baker, D. A., & Grubenhoff, J. A. (2014). Postconcussive symptom exaggeration after pediatric mild trau- matic brain injury. Pediatrics, 133, 643–650. 
doi:10.1542/peds.2013-3195. Lagarde, E., Salmi, L. R., Holm, L. W., Contrand, B., Masson, F., Ribereau-Gayon, R., et al. (2014). Association of symptoms following mild traumatic brain injury with posttraumatic stress disorder vs. postconcussion syndrome. JAMA Psychiatry, 71, 1032–1040. doi:10.1001/jamapsychiatry.2014.666. Lange, R. T., Iverson, G. L., Brooks, B. L., & Rennison, V. L. (2010). Influence of poor effort on self-reported symptoms and neurocognitive test perfor- mance following mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 32, 961–972. doi:10.1080/13803391003645657. Lange, R. T., Pancholi, S., Bhagwat, A., Anderson-Barnes, V., & French, L. (2012). Influence of Poor Effort on Neuropsychological Test Performance in Military Personnel following Mild Traumatic Brain Injury. Journal of Clinical and Experimental Neuropsychology, 34, 453–466. doi:10.1080/13803395. 2011.648175. Larrabee, G. J. (2012). Performance validity and symptom validity in neuropsychological assessment. Journal of the International Neuropsychological Society, 18, 625–630. Lippa, S. M., Agbayani, K. A., Hawes, S., Jokic, E., & Caroselli, J. S. (2014). Effort in acute traumatic brain injury: considering more than pass/fail. Rehabilitation Psychology, 59, 306–312. doi:10.1037/a0037217. Lippa, S. M., Hawes, S., Jokic, E., & Caroselli, J. S. (2013). Sensitivity of the RBANS to acute traumatic brain injury and length of post-traumatic amnesia. Brain Injury, 27, 689–695. doi:10.3109/02699052.2013.771793. Lippa, S. M., Lange, R. T., Bhagwat, A., & French, L. M. (2017). Clinical utility of embedded performance validity tests on the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) following mild traumatic brain injury. Applied Neuropsychol Adult, 24,73–80. doi:10.1080/ 23279095.2015.1100617. Lippa, S. M., Pastorek, N. J., Romesser, J., Linck, J., Sim, A. H., Wisdom, N. M., et al. (2014). Ecological validity of performance validity testing. Archives of Clinical Neuropsychology, 29, 236–244. doi:10.1093/arclin/acu002. MacGregor, A. J., Shaffer, R. A., Dougherty, A. L., Galarneau, M. R., Raman, R., Baker, D. G., et al. (2010). Prevalence and psychological correlates of trau- matic brain injury in operation Iraqi freedom. The Journal of Head Trauma Rehabilitation, 25 (1), 1–8. doi:10.1097/HTR.0b013e3181c2993d. Managment of Concussion/mTBI Working Group. (2009). VA/DoD clinical practice guideline for managment of concussion/mild traumatic brain injury. Journal of Rehabilitation Research and Development, 46, CP1–68. http://www.rehab.research.va.gov/jour/09/46/6/pdf/cpg.pdf. McCormick, C. L., Yoash-Gantz, R. E., McDonald, S. D., Campbell, T. C., & Tupler, L. A. (2013). Performance on the Green Word Memory Test Following Operation Enduring Freedom/Operation Iraqi Freedom-Era Military Service: Test Failure is Related to Evaluation Context. Archives of Clinical Neuropsychology, 28, 808–823. doi:10.1093/arclin/act050. McCrea, M., Pliskin, N., Barth, J., Cox, D., Fink, J., French, L., et al. (2008). Official position of the military TBI task force on the role of neuropsychology and rehabilitation psychology in the evaluation, management, and research of military veterans with traumatic brain injury. The Clinical Neuropsychologist, 22,10–26. doi:10.1080/13854040701760981. McKay, C., Casey, J. E., Wertheimer, J., & Fichtenberg, N. L. (2007). Reliability and validity of the RBANS in a traumatic brain injured sample. Archives of Clinical Neuropsychology, 22,91–98. 
McKay, C., Wertheimer, J. C., Fichtenberg, N. L., & Casey, J. E. (2008). The repeatable battery for the assessment of neuropsychological status (RBANS): Clinical utility in a traumatic brain injury sample. The Clinical Neuropsychologist, 22, 228–241. doi:10.1080/13854040701260370.
Mild Traumatic Brain Injury Committee, American Congress of Rehabilitation Medicine, & Head Injury Interdisciplinary Special Interest Group. (1993). Definition of mild traumatic brain injury. Journal of Head Trauma Rehabilitation, 8, 86–87. https://www.acrm.org/wp-content/uploads/pdf/TBIDef_English_10-10.pdf.
Moore, R. C., Davine, T., Harmell, A. L., Cardenas, V., Palmer, B. W., & Mausbach, B. T. (2013). Using the repeatable battery for the assessment of neuropsychological status (RBANS) effort index to predict treatment group attendance in patients with schizophrenia. Journal of the International Neuropsychological Society, 19, 198–205. doi:10.1017/S1355617712001221.
Morra, L. F., Gold, J. M., Sullivan, S. K., & Strauss, G. P. (2015). Predictors of neuropsychological effort test performance in schizophrenia. Schizophrenia Research, 162, 205–210. doi:10.1016/j.schres.2014.12.033.
Mossman, D., Wygant, D. B., Gervais, R. O., & Hart, K. J. (2017). Trial 1 versus Trial 2 of the test of memory malingering: Evaluating accuracy without a "gold standard". Psychological Assessment, Advance online publication. doi:10.1037/pas0000449.
Nelson, N. W., Hoelzle, J. B., McGuire, K. A., Ferrier-Auerbach, A. G., Charlesworth, M. J., & Sponheim, S. R. (2010). Evaluation context impacts neuropsychological performance of OEF/OIF veterans with reported combat-related concussion. Archives of Clinical Neuropsychology, 25, 713–723. doi:10.1093/arclin/acq075.
Nicholson, K., Martelli, M. F., & Zasler, N. D. (2001). Does pain confound interpretation of neuropsychological test results? NeuroRehabilitation, 16, 225–230. http://www.ncbi.nlm.nih.gov/pubmed/11790908.
Pachet, A. K. (2007). Construct validity of the Repeatable Battery of Neuropsychological Status (RBANS) with acquired brain injury patients. The Clinical Neuropsychologist, 21, 286–293. doi:10.1080/13854040500376823.
Paulson, D., Horner, M. D., & Bachman, D. (2015). A comparison of four embedded validity indices for the RBANS in a memory disorders clinic. Archives of Clinical Neuropsychology, 30, 207–216. doi:10.1093/arclin/acv009.
Proto, D. A., Pastorek, N. J., Miller, B. I., Romesser, J. M., Sim, A. H., & Linck, J. F. (2014). The dangers of failing one or more performance validity tests in individuals claiming mild traumatic brain injury-related postconcussive symptoms. Archives of Clinical Neuropsychology, 29, 614–624. doi:10.1093/arclin/acu044.
Randolph, C., Tierney, M. C., Mohr, E., & Chase, T. N. (1998). The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS): Preliminary clinical validity. Journal of Clinical and Experimental Neuropsychology, 20, 310–319. doi:10.1076/jcen.20.3.310.823.
Rogers, R. (2008). Clinical assessment of malingering and deception (3rd ed.). New York: Guilford.
Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists' beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22, 213–223. doi:10.1016/j.acn.2006.12.004.
Sieck, B. C., Smith, M. M., Duff, K., Paulsen, J. S., & Beglinger, L. J. (2013). Symptom validity test performance in the Huntington Disease Clinic. Archives of Clinical Neuropsychology, 28, 135–143. doi:10.1093/arclin/acs109.
Silverberg, N. D., Wertheimer, J. C., & Fichtenberg, N. L. (2007). An effort index for the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). The Clinical Neuropsychologist, 21, 841–854. doi:10.1080/13854040600850958.
Solomon, G. S., Kuhn, A. W., & Zuckerman, S. L. (2016). Depression as a modifying factor in sport-related concussion: A critical review of the literature. The Physician and Sportsmedicine, 44, 14–19. doi:10.1080/00913847.2016.1121091.
Stenclik, J. H., Miele, A. S., Silk-Eglit, G., Lynch, J. K., & McCaffrey, R. J. (2013). Can the sensitivity and specificity of the TOMM be increased with differential cutoff scores? Applied Neuropsychology Adult, Advance online publication. doi:10.1080/09084282.2012.704603.
Stevens, A., Friedel, E., Mehren, G., & Merten, T. (2008). Malingering and uncooperativeness in psychiatric and psychological assessment: Prevalence and effects in a German sample of claimants. Psychiatry Research, 157, 191–200. doi:10.1016/j.psychres.2007.01.003.
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary (3rd ed.). Oxford; New York: Oxford University Press.
Suhr, J. A., & Gunstad, J. (2005). Further exploration of the effect of "diagnosis threat" on cognitive performance in individuals with mild head injury. Journal of the International Neuropsychological Society, 11, 23–29. doi:10.1017/S1355617705050010.
Terrio, H., Brenner, L. A., Ivins, B. J., Cho, J. M., Helmick, K., Schwab, K., et al. (2009). Traumatic brain injury screening: Preliminary findings in a US Army Brigade Combat Team. The Journal of Head Trauma Rehabilitation, 24, 14–23. doi:10.1097/HTR.0b013e31819581d8.
Tombaugh, T. (1996). TOMM: Test of Memory Malingering. North Tonawanda, NY: Multi-Health Systems Inc.
Tsanadis, J., Montoya, E., Hanks, R. A., Millis, S. R., Fichtenberg, N. L., & Axelrod, B. N. (2008). Brain injury severity, litigation status, and self-report of postconcussive symptoms. The Clinical Neuropsychologist, 22, 1080–1092. doi:10.1080/13854040701796928.
Vasterling, J. J., Brailey, K., Proctor, S. P., Kane, R., Heeren, T., & Franz, M. (2012). Neuropsychological outcomes of mild traumatic brain injury, post-traumatic stress disorder and depression in Iraq-deployed US Army soldiers. British Journal of Psychiatry, 201, 186–192. doi:10.1192/bjp.bp.111.096461.
Verfaellie, M., Lafleche, G., Spiro, A., & Bousquet, K. (2014). Neuropsychological outcomes in OEF/OIF veterans with self-report of blast exposure: Associations with mental health, but not MTBI. Neuropsychology, 28, 337–346. doi:10.1037/neu0000027.
Walters, G. D., Berry, D. T., Rogers, R., Payne, J. W., & Granacher, R. P., Jr. (2009). Feigned neurocognitive deficit: Taxon or dimension? Journal of Clinical and Experimental Neuropsychology, 31, 584–593. doi:10.1080/13803390802363728.
Walters, G. D., Rogers, R., Berry, D. T., Miller, H. A., Duncan, S. A., McCusker, P. J., et al. (2008). Malingering as a categorical or dimensional construct: The latent structure of feigned psychopathology as measured by the SIRS and MMPI-2. Psychological Assessment, 20, 238–247. doi:10.1037/1040-3590.20.3.238.
Whitney, K. A., Shepard, P. H., Williams, A. L., Davis, J. J., & Adams, K. M. (2009). The medical symptom validity test in the evaluation of Operation Iraqi Freedom/Operation Enduring Freedom soldiers: A preliminary study. Archives of Clinical Neuropsychology, 24, 145–152. doi:10.1093/arclin/acp020.
Williams, J. M. (2011). The malingering factor. Archives of Clinical Neuropsychology, 26, 280–285. doi:10.1093/arclin/acr009.
World Health Organization. (1992). The ICD-10 classification of mental and behavioural disorders: Clinical descriptions and diagnostic guidelines. Geneva: World Health Organization.
Young, J. C., Baughman, B. C., & Roper, B. L. (2012). Validation of the repeatable battery for the assessment of neuropsychological status effort index in a veteran sample. The Clinical Neuropsychologist, 26, 688–699. doi:10.1080/13854046.2012.679624.
Zimmer, A., Heyanka, D., & Proctor-Weber, Z. (2017). Concordance validity of PVTs in a sample of veterans referred for mild TBI. Applied Neuropsychology Adult, Advance online publication, 1–10. doi:10.1080/23279095.2017.1319835.

Assessment of performance validity (Larrabee, 2012) is considered standard practice when conducting neuropsychological evaluations of individuals who are involved in litigation or compensation-related evaluations. Formal evaluation of performance validity is considered an integral component of all neuropsychological evaluations and has been strongly supported in position papers by two prominent neuropsychological societies (Bush et al., 2005; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009). Additionally, the Military TBI Task Force (McCrea et al., 2008) recommended that all neuropsychological assessment methods include performance validity tests (PVTs). It has been estimated that 17% (Whitney, Shepard, Williams, Davis, & Adams, 2009) to 58% (Armistead-Jehle, 2010) of veterans and service members presenting for neuropsychological evaluation fail PVTs. Typically, neuropsychological assessment in service members contributes to decision-making regarding whether they are fit for duty, are deployable, should be restricted from hazardous duty, and/or require a medical board evaluation or disability pension. These unique opportunities for secondary gain may explain the high rate of PVT failure in military populations. Indeed, rates of PVT failure vary significantly by evaluation context (Armistead-Jehle & Buican, 2012; Nelson et al., 2010; McCormick, Yoash-Gantz, McDonald, Campbell, & Tupler, 2013).

There is substantial evidence that failure of PVTs is associated with low neuropsychological test scores (Constantinou, Bauer, Ashendorf, Fisher, & McCaffrey, 2005; Green, 2007; Green, Rohling, Lees-Haley, & Allen, 2001; Iverson, 2006; Stevens, Friedel, Mehren, & Merten, 2008; Lange, Pancholi, Bhagwat, Anderson-Barnes, & French, 2012). PVT failure has also been associated with worse self-reported general functioning (Lippa, Pastorek, et al., 2014), greater cognitive complaints (Armistead-Jehle, Gervais, & Green, 2012), and greater post-concussive symptoms (Iverson, Lange, Brooks, & Rennison, 2010; Kirkwood, Peterson, Connery, Baker, & Grubenhoff, 2014; Lange, Iverson, Brooks, & Rennison, 2010; Tsanadis et al., 2008). Therefore, failure of PVTs is associated with greater symptoms, worse cognitive functioning, and greater disability, but the accuracy of those clinical outcome measures is uncertain.

One of the most widely used PVTs (Sharland & Gfeller, 2007) is the Test of Memory Malingering (TOMM; Tombaugh, 1996). The TOMM manual specifies cutoffs for identifying response bias. However, more recent studies have suggested that the use of higher cutoff scores on the TOMM in MTBI patients can increase its sensitivity to detect invalid performance while maintaining acceptable specificity (e.g., ≤48 on Trial 2; Greve, Bianchini, & Doane, 2006; Jones, 2013; Stenclik, Miele, Silk-Eglit, Lynch, & McCaffrey, 2013). Moreover, several studies have used TOMM Trial 1 as an indicator of invalid performance in MTBI patients, with cutoff scores ranging from ≤38 to ≤42 (Bauer, O'Bryant, Lynch, McCaffrey, & Fisher, 2007; Denning, 2012; Greve et al., 2006; Jones, 2013; Stenclik et al., 2013). Denning (2012) provides a summary of multiple studies that have investigated TOMM Trial 1 cutoffs, the weighted average of which was a cutoff of ≤40. Administering formal stand-alone PVTs, such as the TOMM, typically takes 10–20 min (Strauss, Sherman, & Spreen, 2006), which may reduce the amount of time available for administration of other neuropsychological tests.
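To make the contrast between the original and alternative TOMM decision rules concrete, the following is a minimal illustrative sketch in Python. It assumes raw Trial 1 and Trial 2 scores and encodes the cutoffs cited above (Trial 2 below 45 per the test manual; Trial 1 ≤40 or Trial 2 ≤48 as alternative cutoffs); the function names are hypothetical and this is not the authors' scoring code.

```python
# Illustrative sketch (not the authors' scoring code): contrasting the original
# TOMM decision rule with the alternative cutoffs discussed above.
from typing import Optional

def fails_tomm_original(trial2: Optional[int]) -> bool:
    """Original criterion from the manual: a Trial 2 score below 45 is flagged."""
    return trial2 is not None and trial2 < 45

def fails_tomm_alternative(trial1: int, trial2: Optional[int]) -> bool:
    """Alternative criteria from the MTBI literature: Trial 1 <= 40,
    or Trial 2 <= 48 when Trial 2 was administered."""
    return trial1 <= 40 or (trial2 is not None and trial2 <= 48)

# Example: Trial 1 = 42 and Trial 2 = 47 passes the original rule but fails the
# alternative rule, which is exactly the kind of case examined in this study.
print(fails_tomm_original(47), fails_tomm_alternative(42, 47))  # False True
```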
As such, there has been a recent focus on the development and validation of PVTs that are embedded into existing cognitive tests. One such embedded PVT is the Effort Index (EI; Silverberg, Wertheimer, & Fichtenberg, 2007) from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS; Randolph, Tierney, Mohr, & Chase, 1998). The RBANS EI has been used in a number of studies with service members and veterans (Jones, 2016; Lippa, Lange, Bhagwat, & French, 2017; Paulson, Horner, & Bachman, 2015; Zimmer, Heyanka, & Proctor-Weber, 2017); individuals with Parkinson's disease (Carter, Scott, Adams, & Linck, 2016), schizophrenia (Moore et al., 2013; Morra, Gold, Sullivan, & Strauss, 2015), Huntington's disease (Sieck, Smith, Duff, Paulsen, & Beglinger, 2013), and dementia (Dunham, Shadi, Sofko, Denney, & Calloway, 2014; Morra et al., 2015); and those involved in compensation-seeking disability evaluations (Crighton, Wygant, Holt, & Granacher, 2015). The EI is calculated by creating a weighted composite score from performance on the Digit Span and List Recognition subtests. In their original study, Silverberg et al. (2007) concluded that an EI score >3 reflects invalid performance because it was rarely found in the general population. The authors also concluded that an EI score ≥1 is the most sensitive cutoff for identifying invalid performance in patients with post-acute MTBI. Other studies have supported the cutoff of >3 in geriatric (Barker, Horner, & Bachman, 2010; Hook, Marquine, & Hoelzle, 2009) and veteran samples (Barker et al., 2010; Young, Baughman, & Roper, 2012). An investigation of the two suggested cutoffs in an MTBI sample revealed that the ≥1 cutoff classified MTBI participants more accurately than the >3 cutoff (Armistead-Jehle & Hansen, 2011).

Although alternative PVT cutoffs have been supported for use with MTBI patients in the literature, these cutoffs do not yet appear to be widely adopted for use with this population. We have observed that many clinical neuropsychologists do not apply the newer, more sensitive, and slightly less specific PVT cutoffs with their MTBI patients. Recent MTBI research studies continue to define TOMM failure based on original, rather than alternative, cutoffs (Clark et al., 2016; Proto et al., 2014; Verfaellie, Lafleche, Spiro, & Bousquet, 2014). This might be due in part to the emphasis on prioritizing specificity and positive predictive power over sensitivity and negative predictive power when it comes to the use of PVTs (Greve & Bianchini, 2004, 2007). Given the discrepancy between research supporting alternative PVT cutoffs following MTBI and the continued use of original PVT cutoffs, the present study examined the influence of using original versus alternative PVT cutoffs in clinical decision-making.
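The EI itself is derived from lookup tables applied to Digit Span and List Recognition performance in Silverberg and colleagues' (2007) paper; those tables are not reproduced here. Assuming the EI has already been computed, a short sketch of how the two published cutoffs partition scores might look like this (illustrative only):

```python
# Illustrative sketch only. The Effort Index (EI) is computed from Digit Span and
# List Recognition lookup tables in Silverberg et al. (2007), which are not
# reproduced here; this function simply applies the two published cutoffs to an
# already-computed EI value.

def ei_cutoff_flags(effort_index: int) -> dict:
    return {
        "fails_original_cutoff": effort_index > 3,  # EI > 3 is rare in the general population
        "fails_mtbi_cutoff": effort_index >= 1,     # EI >= 1 is more sensitive in post-acute MTBI
    }

print(ei_cutoff_flags(2))
# {'fails_original_cutoff': False, 'fails_mtbi_cutoff': True}
```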
It was hypothesized that: (a) participants evaluated following an MTBI who failed PVTs based on original cutoffs would be most likely to meet cognitive testing criteria for Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) mild and major neurocognitive disorder and symptom reporting criteria for International Classification of Diseases, 10th edition (ICD-10) postconcussional syndrome (PCS); (b) participants who passed PVTs would be least likely to meet criteria for these diagnoses; and (c) participants who failed PVTs based on alternative cutoffs, but not based on original cutoffs, would meet criteria for these diagnoses more frequently than participants who passed PVTs, but less frequently than participants who failed PVTs based on original cutoffs.

Methods

Participants

Participants were 164 U.S. military service members who sustained an MTBI and were evaluated at the Walter Reed Army Medical Center (WRAMC), Washington, DC. Participants were selected from a larger sample of 313 military service members diagnosed with MTBI who were evaluated at WRAMC between November 2003 and January 2011 following a suspected or confirmed TBI, and who had agreed to the use of their clinical data for research purposes. The protocols under which these data were collected were approved by the Institutional Review Board of WRAMC. This study was completed in accordance with the guidelines of the Declaration of Helsinki.

All patients had been referred to the neuropsychology TBI service at WRAMC for consultation as a result of self-reported or clinician-reported concern about their cognitive functioning. Patients were included in the original sample if they: (a) had sustained an MTBI, (b) were administered the RBANS, and (c) had not been administered the RBANS previously by the neuropsychology TBI service. Participants were excluded from the final sample if they were more than 2 years post-injury (n = 22), were missing time since injury data (n = 3), did not complete or had an incomplete Neurobehavioral Symptom Inventory (NSI; Cicerone & Karlmar, 1995; n = 23), had incomplete RBANS data (n = 6), were not administered the TOMM (n = 53), or scored 45–48 on TOMM Trial 1 and were not administered TOMM Trial 2 (n = 46). Participants were administered the RBANS, TOMM, and the NSI for the purposes of providing a brief screen of cognitive functioning and post-concussive symptoms. For the participants who scored 49 or 50 on TOMM Trial 1 (n = 79), TOMM Trial 2 was not administered, under the assumption that participants would score at least as well on Trial 2 as they did on Trial 1. Of 453 patients who scored 49–50 on TOMM Trial 1, only 2 (0.4%) scored 48 and none scored below 48 on Trial 2 (Mossman, Wygant, Gervais, & Hart, 2017). No participants in the current study were administered the TOMM retention trial.

Mild TBI was defined as a period of self-reported loss of consciousness (LOC) ≤15 min, posttraumatic amnesia ≤24 h, and/or altered mental status ≤24 h following a credible injury mechanism.
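As a rough illustration of the case definition just stated, the sketch below encodes the three duration criteria (any one is sufficient) plus the requirement of a credible injury mechanism. It is an assumption-laden rendering for clarity, not the study's screening procedure, and all names are hypothetical; the categorical nature of the available LOC data is discussed next.

```python
# Assumption-laden illustration of the MTBI case definition stated above; all
# names are hypothetical, and None means a criterion was absent or unknown.
from typing import Optional

def meets_study_mtbi_definition(loc_minutes: Optional[float],
                                pta_hours: Optional[float],
                                aoc_hours: Optional[float],
                                credible_mechanism: bool = True) -> bool:
    """LOC <= 15 min, PTA <= 24 h, and/or altered mental status <= 24 h,
    following a credible injury mechanism; any one duration criterion suffices."""
    if not credible_mechanism:
        return False
    loc_ok = loc_minutes is not None and loc_minutes <= 15
    pta_ok = pta_hours is not None and pta_hours <= 24
    aoc_ok = aoc_hours is not None and aoc_hours <= 24
    return loc_ok or pta_ok or aoc_ok

print(meets_study_mtbi_definition(loc_minutes=None, pta_hours=2, aoc_hours=None))  # True
```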
Although we would have preferred to use an LOC criterion of <30 min, consistent with commonly used diagnostic criteria (Carroll, Cassidy, Holm, Kraus, & Coronado, 2004; Management of Concussion/mTBI Working Group, 2009; Mild Traumatic Brain Injury Committee, American Congress of Rehabilitation Medicine, & Head Injury Interdisciplinary Special Interest Group, 1993), the available information regarding LOC was limited to categorical data that did not allow us to differentiate between duration of LOC greater than or less than 30 min (i.e., available data = LOC < 15 min and LOC 16–60 min). Glasgow Coma Scale scores were not available for most participants.

Participants were divided into three mutually exclusive groups based on their performance on the TOMM and RBANS EI. Participants were included in the Original PVT-Fail group if they either failed the RBANS EI with a score >3 (Barker et al., 2010; Hook et al., 2009; Young et al., 2012) or scored below the cutoff specified in the manual on TOMM Trial 2. They were included in the Alternative PVT-Fail group if they did not meet criteria for the Original PVT-Fail group and scored 1–3 on the RBANS EI (Armistead-Jehle & Hansen, 2011), ≤40 on TOMM Trial 1 (Denning, 2012), or 45–48 on TOMM Trial 2 (Greve et al., 2006; Jones, 2013). Participants were included in the PVT-Pass group if they scored >40 on TOMM Trial 1, >48 on TOMM Trial 2 (if administered), and 0 on the RBANS EI.

A total of 26 participants were included in the Original PVT-Fail group. Of these, 24 failed TOMM Trial 2 with a score <45 and 8 participants failed the RBANS EI with a score >3. A total of 53 participants were in the Alternative PVT-Fail group. Of these, 26 failed TOMM Trial 1 with a score ≤40, 22 failed TOMM Trial 2 with a score of 45–48, and 33 failed the RBANS EI with a score of 1–3. A total of 85 participants were in the PVT-Pass group.
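A compact way to express the three mutually exclusive groups defined above is sketched below, assuming TOMM Trial 1 and Trial 2 raw scores (Trial 2 possibly not administered) and an already-computed RBANS EI. This is an illustration of the stated rules, not the authors' code.

```python
# Sketch of the three mutually exclusive validity groups described above,
# assuming raw TOMM scores (Trial 2 may be None when it was not administered)
# and an already-computed RBANS Effort Index. Illustrative names only.
from typing import Optional

def assign_pvt_group(tomm_trial1: int, tomm_trial2: Optional[int], rbans_ei: int) -> str:
    # Original cutoffs: EI > 3, or TOMM Trial 2 below the manual cutoff (< 45).
    if rbans_ei > 3 or (tomm_trial2 is not None and tomm_trial2 < 45):
        return "Original PVT-Fail"
    # Alternative cutoffs: EI of 1-3, Trial 1 <= 40, or Trial 2 of 45-48.
    if 1 <= rbans_ei <= 3 or tomm_trial1 <= 40 or (tomm_trial2 is not None and 45 <= tomm_trial2 <= 48):
        return "Alternative PVT-Fail"
    # Otherwise: Trial 1 > 40, Trial 2 > 48 (if administered), and EI = 0.
    return "PVT-Pass"

print(assign_pvt_group(50, None, 0))  # PVT-Pass
print(assign_pvt_group(43, 47, 0))    # Alternative PVT-Fail
print(assign_pvt_group(38, 41, 5))    # Original PVT-Fail
```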
Measures and Procedure

Neurocognitive Assessment and DSM-5 Category A Criteria for Mild and Major Neurocognitive Disorder

The RBANS (Randolph et al., 1998) is a battery of 12 subtests assessing 5 cognitive domains, including Immediate Memory/Learning, Visuospatial/Constructional, Language, Attention, and Delayed Memory. The RBANS was originally developed to screen for dementia, but has also been validated for use in patients following TBI (Lippa, Hawes, Jokic, & Caroselli, 2013; McKay, Casey, Wertheimer, & Fichtenberg, 2007; McKay, Wertheimer, Fichtenberg, & Casey, 2008; Pachet, 2007). The DSM-5 Category A criteria for mild neurocognitive disorder indicate that a patient demonstrates performance ≥1 SD below expectations in at least one cognitive domain (American Psychiatric Association, 2013). The DSM-5 Category A criteria for major neurocognitive disorder indicate that the patient demonstrates performance ≥2 SD below expectations in at least one cognitive domain. Importantly, a clinical diagnosis of mild or major neurocognitive disorder requires clinical judgment and, in the case of major neurocognitive disorder, evidence of reduced independence in daily activities.

The present study defined mild neurocognitive disorder as present when at least two scores in a single cognitive domain were ≥1 SD below the mean. The cognitive domains of Immediate Memory, Visuospatial Functioning, Language, and Attention each consisted of only two subtests, and therefore performance must have been ≥1 SD below the mean on both subtests in at least one domain for the participant to meet DSM-5 Category A criteria for mild neurocognitive disorder. For the Delayed Memory domain, participants must have been ≥1 SD below the mean on at least two of the following subtests: List Recall, Story Recall, or Figure Recall. Major neurocognitive disorder was defined as present when at least two performances in a single cognitive domain were ≥2 SD below the mean of the normative sample. We required two scores below the respective cutoffs because obtaining a single low score is fairly common in healthy adults (see Brooks, Iverson, and Holdnack (2013) for a detailed discussion).

Post-Concussive Symptoms and Postconcussional Syndrome

The NSI (Cicerone & Karlmar, 1995) is a 22-item questionnaire assessing post-concussive symptoms. Participants are asked to rate symptoms on a 5-point Likert scale ranging from 0 (none) to 4 (very severe). Each of the NSI symptoms was analyzed individually, and a total score consisting of all 22 items was also computed.

Participants were classified into groups reflecting whether they endorsed symptoms reflective of PCS based on ICD-10 criteria. The ICD-10 criteria for PCS require a person to have a history of 'head trauma with a loss of consciousness' preceding the onset of symptoms by a period of up to 4 weeks and to have symptoms in at least three of six categories: physical symptoms, emotional symptoms, cognitive symptoms, insomnia, reduced tolerance to alcohol, and preoccupation with the aforementioned symptoms and fear of permanent brain damage (World Health Organization, 1992). For the present study, we did not require prior injuries to have a witnessed loss of consciousness. Additionally, because the NSI contains symptoms covering the physical, emotional, cognitive, and insomnia domains, but not the alcohol tolerance and preoccupation with symptoms/brain damage domains, only the first four domains were evaluated. It is important to acknowledge that symptoms endorsed on the NSI should be evaluated clinically; clinical diagnosis of PCS is multifaceted and requires a comprehensive evaluation (see Iverson and colleagues (2013) for a comprehensive discussion). For the purposes of this study, however, participants were classified as meeting symptom criteria for PCS if they endorsed at least one symptom in at least three of the four physical, emotional, cognitive, and insomnia domains as moderate (i.e., a score of 2 on a scale of 0–4) or greater on the NSI. Otherwise, participants' responses on the NSI were classified 'within normal limits.'

Statistical Analyses

Chi-square analyses and ANOVAs were used to compare the three participant groups in terms of demographic variables, self-reported post-concussive symptoms, cognitive test scores, and the base rates of meeting DSM-5 Category A criteria for mild and major neurocognitive disorder and ICD-10 criteria for PCS. SPSS (version 20) was used for all analyses. All analyses were conducted with a p value of <.01 to minimize Type I error.
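Before turning to the results, the two case-definition rules described in this section can be made concrete with a short sketch: a DSM-5 Category A check requiring at least two low scores within one RBANS domain, and the ICD-10-based PCS symptom rule requiring at least one moderate symptom in at least three of the four covered NSI domains. The domain groupings follow the text; the data layout, function names, and the abbreviated NSI item lists are assumptions made for illustration.

```python
# Hedged sketch of the two case-definition rules in this section; not the
# authors' analysis code.

RBANS_DOMAINS = {
    "Immediate Memory": ["List Learning", "Story Memory"],
    "Visuospatial": ["Figure Copy", "Line Orientation"],
    "Language": ["Picture Naming", "Semantic Fluency"],
    "Attention": ["Digit Span", "Coding"],
    "Delayed Memory": ["List Recall", "Story Recall", "Figure Recall"],
}

def meets_category_a(subtest_z: dict, threshold: float) -> bool:
    """threshold = -1.0 for mild, -2.0 for major; requires at least two subtest
    z scores at or below the threshold within a single domain."""
    for subtests in RBANS_DOMAINS.values():
        low = sum(1 for name in subtests if subtest_z.get(name, 0.0) <= threshold)
        if low >= 2:
            return True
    return False

NSI_DOMAINS = {  # abbreviated item lists, for illustration only
    "physical": ["dizziness", "headaches", "nausea"],
    "cognitive": ["poor concentration", "forgetfulness"],
    "emotional": ["anxiety", "depression", "irritability"],
    "insomnia": ["difficulty sleeping"],
}

def meets_pcs_symptom_rule(nsi_ratings: dict) -> bool:
    """At least one item rated moderate or greater (>= 2 on the 0-4 scale)
    in at least three of the four covered domains."""
    domains_endorsed = sum(
        any(nsi_ratings.get(item, 0) >= 2 for item in items)
        for items in NSI_DOMAINS.values()
    )
    return domains_endorsed >= 3
```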
Results

Demographics and Injury Characteristics

The majority of participants (95.1%) were men. Participants were an average of 28.1 (SD = 7.3) years old, with an average of 13.0 (SD = 1.9) years of education. Participants identified themselves as Caucasian (76.2%), Hispanic (13.4%), African American (4.9%), or another race (3.0%); 2.4% did not report a race/ethnicity. Participants were assessed an average of 4.1 months (SD = 5.0; range = 0–24 months) following injury. There were no significant main effects for age, education, gender, or ethnicity across the three PVT groups. Time since injury did differ significantly across the groups, F(1, 161) = 3.306, p = .039. Fisher's LSD post hoc comparisons indicated that the Original PVT-Fail group (M = 6.42, SD = 4.99) was significantly further from injury than the Alternative PVT-Fail group (M = 3.70, SD = 4.38, p = .022) and the PVT-Pass group (M = 3.73, SD = 4.86, p = .016). Time since injury did not differ significantly between the PVT-Pass and Alternative PVT-Fail groups (p = .971).

Neurocognitive Performance and Classification of Mild and Major Neurocognitive Disorder

Descriptive statistics, group comparisons, and effect sizes (Cohen, 1988) for the RBANS subtests and index scores by PVT group are presented in Table 1. Data for the majority of the RBANS indices and individual subtests were not normally distributed; therefore, pair-wise comparisons using Mann–Whitney U tests were used. They indicated that both the Original PVT-Fail and Alternative PVT-Fail groups performed significantly worse on all RBANS indices compared to the PVT-Pass group. Participants in the Original PVT-Fail group performed worse than the Alternative PVT-Fail group on the Total Scale, Immediate Memory, and Delayed Memory Indices. Compared to the PVT-Pass group, the Original PVT-Fail group performed significantly worse on 11 of 12 individual subtests, with Figure Copy being the only exception. Participants in the Alternative PVT-Fail group performed significantly worse than the PVT-Pass group on 8 of 12 individual RBANS subtests (i.e., List Learning, Line Orientation, Semantic Fluency, Digit Span, Coding, List Recall, List Recognition, and Figure Recall).

Table 1. Descriptive statistics of RBANS index standard scores and subtest z scores by group
Measure | 1. PVT-Pass (n = 85) M (SD) | 2. Alternative PVT-Fail (n = 53) M (SD) | 3. Original PVT-Fail (n = 26) M (SD) | d 1–3 | d 1–2 | d 2–3 | MWU group differences at p < .01
Immediate memory index | 100.7 (14.1) | 90.1 (14.5) | 73.7 (12.4) | 1.97 | 0.74 | 1.19 | 1 > 2 > 3
List learning | 0.2 (0.8) | −0.4 (1.0) | −1.8 (1.0) | 2.22 | 0.69 | 1.31 | 1 > 2 > 3
Story memory | 0.1 (1.1) | −0.4 (1.1) | −1.6 (1.3) | 1.51 | 0.44 | 1.01 | 1&2 > 3
Visuospatial/constructional index | 93.7 (15.1) | 83.4 (14.3) | 75.9 (15.0) | 1.17 | 0.69 | 0.51 | 1 > 2&3
Figure copy | −1.2 (1.8) | −1.8 (1.9) | −2.5 (2.8) | 0.64 | 0.35 | 0.31 | –
Line orientation | 0.5 (0.7) | 0.2 (0.7) | −0.9 (1.6) | 1.59 | 0.43 | 1.12 | 1 > 2 > 3
Language index | 92.0 (12.3) | 81.3 (15.4) | 77.1 (15.4) | 1.15 | 0.79 | 0.27 | 1 > 2&3
Picture naming | 0.1 (0.8) | −0.5 (1.6) | −0.8 (1.4) | 0.95 | 0.53 | 0.21 | 1 > 3
Semantic fluency | −0.6 (1.2) | −1.4 (1.2) | −1.9 (1.2) | 1.03 | 0.63 | 0.40 | 1 > 2&3
Attention index | 91.5 (14.2) | 73.9 (17.5) | 65.3 (19.4) | 1.69 | 1.14 | 0.47 | 1 > 2&3
Digit span | −0.1 (1.0) | −1.0 (1.1) | −1.3 (1.2) | 1.15 | 0.88 | 0.25 | 1 > 2&3
Coding | −0.7 (1.0) | −1.5 (1.2) | −2.6 (1.7) | 1.61 | 0.74 | 0.78 | 1 > 2 > 3
Delayed memory index | 97.8 (7.6) | 79.5 (17.8) | 55.4 (12.7) | 4.81 | 1.59 | 1.50 | 1 > 2 > 3
List recall | 0.3 (0.9) | −0.7 (1.1) | −1.9 (1.1) | 2.30 | 0.97 | 1.07 | 1 > 2 > 3
List recognition | 0.0 (0.7) | −1.8 (2.2) | −5.8 (3.4) | 4.31 | 1.38 | 1.52 | 1 > 2 > 3
Story recall | −0.1 (1.0) | −0.6 (1.1) | −1.8 (1.2) | 1.69 | 0.50 | 1.05 | 1&2 > 3
Figure recall | −0.1 (0.8) | −1.1 (1.3) | −2.0 (1.0) | 2.12 | 0.94 | 0.77 | 1 > 2 > 3
Total scale index | 92.9 (10.1) | 76.4 (12.3) | 62.8 (10.6) | 2.94 | 1.51 | 1.16 | 1 > 2 > 3
Note: PVT = performance validity test; MWU = Mann–Whitney U test.

The percentage of participants who met DSM-5 Category A criteria for mild and major neurocognitive disorder, as well as the percentage of participants scoring one and two standard deviations below the mean of the normative sample on each of the RBANS indices, are presented in Table 2. Chi-square tests with Yates continuity correction indicated that participants in the PVT-Pass group were significantly less likely than participants in the Alternative PVT-Fail group to meet cognitive testing criteria for mild neurocognitive disorder (χ²(1, n = 138) = 22.38, p < .001) or major neurocognitive disorder (χ²(1, n = 138) = 15.20, p < .001). Participants in the PVT-Pass group were also significantly less likely than participants in the Original PVT-Fail group to meet cognitive testing criteria for mild neurocognitive disorder (χ²(1, n = 111) = 30.52, p < .001) or major neurocognitive disorder (χ²(1, n = 111) = 47.07, p < .001). The majority (84.6%) of the participants in the Original PVT-Fail group, 64.2% of participants in the Alternative PVT-Fail group, and 22.4% of participants in the PVT-Pass group met DSM-5 Category A criteria for mild neurocognitive disorder. Similarly, 57.7% of the Original PVT-Fail group and 22.6% of the Alternative PVT-Fail group, but only 1.2% of participants in the PVT-Pass group, met DSM-5 Category A criteria for major neurocognitive disorder.

Additional analyses were conducted to evaluate how the proportion of participants meeting Category A criteria for mild and major neurocognitive disorder changed when alternative cutoffs were used instead of original cutoffs. As seen in Fig. 1, when alternative cutoffs were used instead of original cutoffs, 34 fewer participants met DSM-5 Category A criteria for mild neurocognitive disorder, changing the percentage of participants meeting cognitive testing criteria for mild neurocognitive disorder from 32.3% to 11.6% and increasing the percentage of participants who failed PVTs from 15.9% to 48.2%.
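The effect sizes in Table 1 are consistent with Cohen's d computed from a pooled standard deviation. As a worked check under that assumption, the reported means, SDs, and group sizes for the Immediate Memory Index reproduce the tabled d of 1.97 for the PVT-Pass versus Original PVT-Fail comparison; this is a verification sketch, not the authors' analysis script.

```python
# Worked check of the Table 1 effect sizes, assuming Cohen's d with a pooled
# standard deviation (Cohen, 1988).
import math

def cohens_d(m1: float, sd1: float, n1: int, m2: float, sd2: float, n2: int) -> float:
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Immediate Memory Index: PVT-Pass (M = 100.7, SD = 14.1, n = 85) versus
# Original PVT-Fail (M = 73.7, SD = 12.4, n = 26).
print(round(cohens_d(100.7, 14.1, 85, 73.7, 12.4, 26), 2))  # ~1.97, matching d(1-3)
```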
Table 2. Rates of meeting DSM-5 Category A criteria for mild and major neurocognitive disorder and PCS, and rates of consistent performance ≥1 SD or ≥2 SDs below the standardization sample mean on each cognitive domain
Outcome | 1. PVT-Pass % | 2. Alternative PVT-Fail % | 3. Original PVT-Fail % | Chi-square (df = 2) | Pair-wise comparisons p < .01
Base rates of meeting cognitive criteria for neurocognitive disorder
Met cognitive criteria for mild neurocognitive disorder | 22.4 | 64.2 | 84.6 | 41.81 | 1 < 2&3
Met cognitive criteria for major neurocognitive disorder | 1.2 | 22.6 | 57.7 | 46.63 | 1 < 2 < 3
Base rates of scores ≥1 SD below the mean
Immediate memory (2/2 scores) | 5.9 | 18.9 | 65.4 | 44.91 | 1&2 < 3
Visuospatial (2/2 scores) | 3.5 | 1.9 | 19.2 | 11.42 | —
Language (2/2 scores) | 2.4 | 9.4 | 19.2 | 8.88 | 1 < 3
Attention (2/2 scores) | 10.6 | 39.6 | 57.7 | 28.03 | 1 < 2&3
Delayed memory (2/3 scores) | 5.9 | 41.5 | 76.9 | 55.47 | 1 < 2 < 3
Base rates of scores ≥2 SD below the mean
Immediate memory (2/2 scores) | 0.0 | 3.8 | 30.8 | 33.66 | 1&2 < 3
Visuospatial (2/2 scores) | 0.0 | 1.9 | 15.4 | 16.3 | 1 < 3
Language (2/2 scores) | 0.0 | 3.8 | 15.4 | 13.37 | 1 < 3
Attention (2/2 scores) | 0.0 | 7.5 | 26.9 | 23.15 | 1 < 3
Delayed memory (2/3 scores) | 1.2 | 9.4 | 50.0 | 46.69 | 1&2 < 3
Note: PVT = performance validity test; PCS = postconcussional syndrome.

[Fig. 1. Number and percentage of participants failing PVTs, meeting DSM-5 Category A criteria for mild neurocognitive disorder, or not meeting DSM-5 Category A criteria for mild neurocognitive disorder when different PVT cutoffs are used (panels: Traditional Cutoffs, Alternative Cutoffs; y-axis: Number of Participants).]

As shown in Table 2, the Original PVT-Fail group was more likely than the PVT-Pass group to perform ≥2 SD below the standardization sample mean in each of the five cognitive domains. The Original PVT-Fail group was also more likely than the PVT-Pass group to perform ≥1 SD below the standardization sample mean in all cognitive domains except for Visuospatial Functioning. The Alternative PVT-Fail group was more likely than the PVT-Pass group to perform ≥1 SD below the standardization mean in the domains of Attention and Delayed Memory. Overall, the memory indices were most strongly related to PVT performance. No participants in the PVT-Pass group performed ≥2 SDs below the mean of the normative sample on the Immediate Memory Index, while 3.8% of participants in the Alternative PVT-Fail group and 30.8% of participants in the Original PVT-Fail group scored ≥2 SDs below the mean of the normative sample. On the Delayed Memory Index, 5.9% of participants in the PVT-Pass group, 41.5% of participants in the Alternative PVT-Fail group, and 76.9% of participants in the Original PVT-Fail group scored ≥1 SD below the standardization sample mean. Similarly, 1.2% of participants in the PVT-Pass group, 9.4% of participants in the Alternative PVT-Fail group, and 50.0% of participants in the Original PVT-Fail group scored ≥2 SD below the standardization sample mean on the Delayed Memory Index.

The number of cognitive domains with consistent low scores is displayed in Figs. 2 and 3. Only 1.2% of participants in the PVT-Pass group scored ≥1 SD below the standardization sample mean in three or more cognitive domains, whereas 11.3% of participants in the Alternative PVT-Fail group and 46.1% of participants in the Original PVT-Fail group had scores that were consistently ≥1 SD below the mean in three or more cognitive domains. Only 1.2% of participants in the PVT-Pass group scored ≥2 SD below the mean in one or more cognitive domains, compared to 22.7% of participants in the Alternative PVT-Fail group and 57.6% of participants in the Original PVT-Fail group.

[Fig. 2. Percentage of participants with 0–5 cognitive domain scores ≥1 SD below the mean of the normative sample.]

[Fig. 3. Percentage of participants with 0–5 cognitive domain scores ≥2 SD below the mean of the normative sample.]
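The chi-square values in Table 2 can be approximately recovered from the reported group sizes and percentages. The sketch below back-calculates the counts for mild neurocognitive disorder, so the result is approximate because of rounding; it assumes scipy is available and is a verification sketch, not the authors' analysis.

```python
# Approximate check of the Table 2 chi-square (df = 2) for mild neurocognitive
# disorder, with counts back-calculated from the reported group sizes and
# percentages (85 x 22.4% ~ 19, 53 x 64.2% ~ 34, 26 x 84.6% ~ 22).
from scipy.stats import chi2_contingency

met = [19, 34, 22]                     # PVT-Pass, Alternative PVT-Fail, Original PVT-Fail
not_met = [85 - 19, 53 - 34, 26 - 22]  # remaining participants in each group
chi2, p, dof, expected = chi2_contingency([met, not_met])
print(round(chi2, 2), dof)             # close to the tabled 41.81, with df = 2
```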
Post-concussion Symptoms

Descriptive statistics, group comparisons, and Cohen's effect sizes (Cohen, 1988) for the 22 individual NSI symptoms and total score, by PVT group, are presented in Table 3. Participants in both PVT-Fail groups had higher NSI total scores compared to those in the PVT-Pass group. Participants in the Original PVT-Fail group reported significantly greater symptom severity than participants in the PVT-Pass group for 20 of the 22 individual items, with the largest effect sizes found for symptoms relating to headaches, sensitivity to light, and depression (all ds ≥ 1.21).

Table 3. Descriptive statistics for the individual Neurobehavioral Symptom Inventory symptoms by group
Symptom | 1. PVT-Pass (n = 85) M (SD) | 2. Alternative PVT-Fail (n = 53) M (SD) | 3. Original PVT-Fail (n = 26) M (SD) | d 1–3 | d 1–2 | d 2–3 | MWU group differences at p < .01
1. Feeling dizzy | 0.8 (0.9) | 1.2 (1.0) | 1.5 (0.9) | 0.81 | 0.45 | 0.31 | 1 < 3
2. Loss of balance | 0.8 (0.9) | 1.1 (0.9) | 1.5 (1.0) | 0.72 | 0.33 | 0.38 | 1 < 3
3. Poor coordination, clumsy | 0.7 (0.8) | 1.1 (0.9) | 1.4 (1.0) | 0.85 | 0.44 | 0.39 | 1 < 3
4. Headaches | 1.4 (1.2) | 2.1 (1.2) | 2.7 (0.9) | 1.21 | 0.58 | 0.56 | 1 < 2&3
5. Nausea | 0.5 (0.8) | 0.7 (1.0) | 1.3 (1.1) | 0.84 | 0.22 | 0.55 | 1 < 3
6. Vision problems | 1.0 (1.1) | 0.9 (0.9) | 1.7 (1.3) | 0.60 | 0.03 | 0.71 | —
7. Sensitivity to light | 0.9 (0.9) | 1.3 (1.1) | 2.2 (1.0) | 1.34 | 0.40 | 0.80 | 1&2 < 3
8. Hearing difficulty | 1.0 (1.0) | 1.1 (1.0) | 1.6 (1.1) | 0.60 | 0.12 | 0.48 | 1 < 3
9. Sensitivity to noise | 1.0 (1.1) | 1.3 (1.2) | 1.7 (1.2) | 0.66 | 0.26 | 0.35 | 1 < 3
10. Numbness or tingling | 1.0 (1.0) | 1.1 (1.0) | 1.1 (1.0) | 0.08 | 0.06 | 0.02 | —
11. Change in taste and/or smell | 0.3 (0.7) | 0.5 (0.7) | 1.0 (1.3) | 0.77 | 0.20 | 0.53 | 1 < 3
12. Loss/increased appetite | 0.6 (1.0) | 1.2 (1.0) | 1.8 (1.4) | 1.10 | 0.56 | 0.54 | 1 < 2&3
13. Poor concentration | 1.4 (1.1) | 2.0 (1.0) | 2.3 (1.2) | 0.82 | 0.51 | 0.36 | 1 < 2&3
14. Forgetfulness/memory problems | 1.6 (1.2) | 2.2 (1.0) | 2.8 (0.9) | 1.11 | 0.59 | 0.59 | 1 < 2&3
15. Difficulty making decisions | 0.8 (0.9) | 1.3 (1.1) | 1.8 (1.3) | 0.99 | 0.51 | 0.45 | 1 < 2&3
16. Slowed thinking/getting organized | 1.0 (1.0) | 1.7 (1.1) | 1.8 (1.1) | 0.76 | 0.65 | 0.10 | 1 < 2&3
17. Fatigue/loss of energy/tires easily | 1.0 (1.0) | 1.7 (1.3) | 2.0 (1.1) | 1.01 | 0.64 | 0.28 | 1 < 2&3
18. Difficulty falling or staying asleep | 1.9 (1.3) | 2.2 (1.3) | 2.9 (1.3) | 0.75 | 0.22 | 0.53 | 1 < 3
19. Feeling anxious or tense | 1.14 (1.09) | 1.60 (1.26) | 2.19 (1.50) | 0.89 | 0.40 | 0.44 | 1 < 3
20. Feeling depressed or sad | 0.61 (0.85) | 1.08 (1.16) | 1.81 (1.39) | 1.23 | 0.48 | 0.59 | 1 < 3
21. Irritability, easily annoyed | 1.35 (1.14) | 1.79 (1.18) | 2.65 (1.20) | 1.13 | 0.38 | 0.73 | 1&2 < 3
22. Poor frustration tolerance/overwhelmed | 1.00 (1.06) | 1.53 (1.12) | 2.19 (1.50) | 1.03 | 0.49 | 0.53 | 1 < 2&3
Total score | 21.79 (14.83) | 30.60 (15.86) | 42.17 (16.81) | 1.33 | 0.58 | 0.71 | 1 < 2&3
Note: PVT = performance validity test; MWU = Mann–Whitney U test.

Symptom endorsement rates were further compared against ICD-10 symptom reporting criteria for PCS. Chi-square analyses revealed significant differences between the groups; 65.4% of participants in the Original PVT-Fail group, 43.4% of participants in the Alternative PVT-Fail group, and 21.7% of participants in the PVT-Pass group met ICD-10 PCS criteria based on symptoms endorsed as moderate or greater, χ²(3, n = 162) = 18.42, p < .001. Participants in the PVT-Pass group were significantly less likely than participants in the Alternative PVT-Fail and Original PVT-Fail groups to meet ICD-10 symptom reporting criteria for PCS. When using original PVT criteria, 41 of the 136 participants who passed PVTs met symptom reporting criteria for PCS; however, when alternative PVT criteria were used, only 18 participants who passed PVTs met symptom reporting criteria for PCS.

Discussion

The current study illustrated a statistically significant and clinically meaningful relationship between PVT performance and the overall outcome from a neuropsychological evaluation. Participants who failed PVTs were more likely to meet ICD-10 symptom reporting criteria for PCS and DSM-5 Category A criteria for mild and major neurocognitive disorder. Participants who failed PVTs performed worse on most cognitive measures and reported worse PCS symptoms than participants who passed PVTs. The current study also found a PVT failure rate of 15.9% when using original cutoff scores; however, when using alternative cutoffs that have been validated in MTBI samples, the failure rate more than tripled to 48.2%. Additionally, when using alternative PVT cutoffs rather than original PVT cutoffs, the number of patients who passed PVTs and met DSM-5 Category A criteria for mild neurocognitive disorder dropped from 53 (32.3%) to 19 (11.6%), and the number of participants who passed PVTs and met cognitive testing criteria for major neurocognitive disorder dropped from 13 (7.9%) to 1 (0.6%). Use of alternative cutoffs also decreased the number of participants who passed PVTs and met criteria for PCS from 41 (24.4%) to 18 (10.7%).

Participants in both PVT failure groups performed significantly worse on cognitive measures and reported more severe post-concussion symptoms than participants who passed PVTs based on alternative cutoffs. Additionally, participants who failed PVTs based on original cutoffs also performed more poorly on many cognitive measures than participants who failed PVTs based on alternative cutoffs but not based on original cutoffs. These findings illustrate a gradient in the relationship between effort testing and clinical outcomes and support past research suggesting that performance validity lies on a continuum (Armistead-Jehle, Cooper, & Vanderploeg, 2016; Lippa, Agbayani, Hawes, Jokic, & Caroselli, 2014; Rogers, 2008; Walters, Berry, Rogers, Payne, & Granacher, 2009; Walters et al., 2008).
Notably, Armistead-Jehle and colleagues (2016) previously demonstrated that, in active duty service members with a history of possible mild TBI who passed two formal PVTs, performance validity test scores explained the greatest amount of variance (6.6%) in neurocognitive performance. Patients who performed perfectly on both PVTs performed significantly better (medium to large effect sizes) on the RBANS Immediate Memory, Visuospatial, Language, Delayed Memory, and Total Score Indices. These results are similar to the current study's finding of significantly better performance on all RBANS Indices (moderate to very large effect sizes) in patients who passed PVTs compared to patients who failed alternative PVT cutoffs.

The most striking differences in cognitive performance between the groups were noted on the memory indices. Only 1.2% of participants who passed alternative PVT cutoffs scored ≥2 SDs below the mean of the normative sample in the Delayed Memory domain, while 9.4% of participants who failed PVTs based on alternative cutoffs and 50.0% of participants who failed PVTs based on original cutoffs scored at least two SDs below the mean of the normative sample. No participants who passed PVTs based on alternative cutoffs scored ≥2 SDs below the mean of the normative sample on the Immediate Memory Index, while 3.8% of participants who failed PVTs based on alternative cutoffs and 30.8% of participants who failed PVTs based on original cutoffs scored ≥2 SDs below the mean.

A rather large number (22.4%) of participants who passed PVTs met the DSM-5 Category A criteria for mild neurocognitive disorder. Such a high rate of diagnosis in a healthy sample suggests that DSM-5 Category A criteria for mild neurocognitive disorder may be rather non-specific. The high rate of satisfying the DSM-5 Category A criteria for mild neurocognitive disorder in the present study could also be the result of several factors. First, this study was conducted with active duty service members, many of whom likely had comorbidities including depression, anxiety, PTSD, and chronic pain, which have all been shown to negatively affect cognitive performance (Drag, Spencer, Walker, Pangilinan, & Bieliauskas, 2012; Nicholson, Martelli, & Zasler, 2001; Vasterling et al., 2012; Verfaellie, Lafleche, Spiro, & Bousquet, 2014). Second, 30 participants were tested within a month of their injury, when cognitive impairments are more likely. Third, financial incentives have been shown to be strongly related to neuropsychological performance (Binder & Rohling, 1996), and many of these evaluations were conducted in the context of a disability evaluation. Finally, the participants were all aware they were undergoing an evaluation related to their TBI. Diagnosis threat could have affected these participants' performance, as this has been shown to negatively affect cognitive performance in MTBI patients (Blaine, Sullivan, & Edmed, 2013; Suhr & Gunstad, 2005). Additionally, it is acknowledged that this study defined mild and major neurocognitive disorder only by cognitive test performance. If the clinical interview and assessment of activities of daily living had been considered, it is possible these rates would be lower.
The main limitation of the current study is that the RBANS EI, which was used to define both of the PVT failure groups, overlaps with the RBANS Digit Span and List Recognition subtests, as well as the Attention, Delayed Memory, and Total Scale Indices. We ultimately decided to keep these measures in our analyses, rather than removing or altering them, because they are all used clinically. It is important to note, however, that the differences between the groups on these measures may be artificially inflated. Additionally, because some participants were tested within a month of their injury, it is possible that they scored below the cutoff due to genuine cognitive sequelae and were therefore incorrectly identified as failing PVTs; the alternative cutoffs for the TOMM and RBANS EI used in this study have not been validated in mild TBI samples. Importantly, however, participants in the PVT-Fail groups were significantly further from their injury than participants in the PVT-Pass group. As with most studies employing PVTs, it is possible that certain participants were misclassified, because PVT criteria are defined through a rather tautological method (Bigler, 2012). This is largely due to the fact that patients putting forth invalid test performances will not volunteer for validity studies, and therefore there is no way to be certain that the criterion is accurate (Williams, 2011). Regardless of which cutoffs were used, there was a clear relationship between PVT failure and meeting the cognitive testing criteria for neurocognitive disorder and PCS diagnoses, suggesting that, even if the group classifications were not 100% accurate, the overall findings are valid.

In conclusion, the current study illustrated that active duty service members with a history of MTBI who perform poorly on PVT measures are much more likely to fulfill the DSM-5 Category A criteria for neurocognitive disorder or ICD-10 symptom criteria for postconcussional syndrome. Presumably, many of these diagnoses would be inaccurate due to inadequate effort on testing and symptom exaggeration. Similar to previous findings supporting the use of updated cutoffs for the TOMM and RBANS EI in patients with mild TBI, the present findings demonstrate that these alternative cutoffs identify many more cases of possible invalid responding than the original cutoffs. Future research should continue to investigate the alternative PVT cutoffs examined here with mild TBI patients in non-military and/or non-disability settings. Additionally, continued research should investigate additional alternative cutoffs for use with mild TBI patients. By using only original cutoffs, clinicians and researchers may be missing some people with invalid performances. As always, PVT failure should be viewed in a clinical context. That is, failure of PVTs is indicative of invalid test data; attribution of etiology may be more difficult. Nonetheless, taking test results at face value in a patient who fails PVTs is likely to lead to incorrect assumptions. Clinicians should consider adopting well-validated alternative PVT cutoffs for use with MTBI patients.

Conflict of Interest

None declared.

Declaration and Disclosures

The authors alone are responsible for the content and writing of the paper.
Grant Iverson, Ph.D., has been reimbursed by the government, professional scientific bodies, and commercial organizations for discussing or presenting research relating to MTBI and sport-related concussion at meetings, scientific conferences, and symposiums. He has a clinical practice in forensic neuropsychology involving individuals who have sustained MTBIs. He has received honorariums for serving on research panels that provide scientific peer review of programs. The other authors report no competing interests or conflicts of interest.

References

Armistead-Jehle, P. (2010). Symptom validity test performance in U.S. veterans referred for evaluation of mild TBI. Applied Neuropsychology, 17, 52–59. doi:10.1080/09084280903526182.
Armistead-Jehle, P., & Buican, B. (2012). Evaluation context and Symptom Validity Test performances in a U.S. military sample. Archives of Clinical Neuropsychology, 27, 828–839. doi:10.1093/arclin/acs086.
Armistead-Jehle, P., Cooper, D. B., & Vanderploeg, R. D. (2016). The role of performance validity tests in the assessment of cognitive functioning after military concussion: A replication and extension. Applied Neuropsychology: Adult, 23, 264–273. doi:10.1080/23279095.2015.1055564.
Armistead-Jehle, P., Gervais, R. O., & Green, P. (2012). Memory complaints inventory and symptom validity test performance in a clinical sample. Archives of Clinical Neuropsychology, 27, 725–734. doi:10.1093/arclin/acs071.
Armistead-Jehle, P., & Hansen, C. L. (2011). Comparison of the repeatable battery for the assessment of neuropsychological status effort index and stand-alone symptom validity tests in a military sample. Archives of Clinical Neuropsychology, 26, 592–601. doi:10.1093/arclin/acr049.
Barker, M. D., Horner, M. D., & Bachman, D. L. (2010). Embedded indices of effort in the repeatable battery for the assessment of neuropsychological status (RBANS) in a geriatric sample. The Clinical Neuropsychologist, 24, 1064–1077. doi:10.1080/13854046.2010.486009.
Bauer, L., O'Bryant, S. E., Lynch, J. K., McCaffrey, R. J., & Fisher, J. M. (2007). Examining the test of memory malingering trial 1 and word memory test immediate recognition as screening tools for insufficient effort. Assessment, 14, 215–222. doi:10.1177/1073191106297617.
Bigler, E. D. (2012). Symptom validity testing, effort, and neuropsychological assessment. Journal of the International Neuropsychological Society, 18, 632–640.
Binder, L. M., & Rohling, M. L. (1996). Money matters: A meta-analytic review of the effects of financial incentives on recovery after closed-head injury. American Journal of Psychiatry, 153, 7–10. doi:10.1097/00001199-199608000-00012.
Blaine, H., Sullivan, K. A., & Edmed, S. L. (2013). The effect of varied test instructions on neuropsychological performance following mild traumatic brain injury: An investigation of "diagnosis threat". Journal of Neurotrauma, 30, 1405–1414. doi:10.1089/neu.2013.2865.
Boyle, E., Cancelliere, C., Hartvigsen, J., Carroll, L. J., Holm, L. W., & Cassidy, J. D. (2014). Systematic review of prognosis after mild traumatic brain injury in the military: Results of the International Collaboration on Mild Traumatic Brain Injury Prognosis. Archives of Physical Medicine and Rehabilitation, 95, S230–237. doi:10.1016/j.apmr.2013.08.297.
Brooks, B. L., Iverson, G. L., & Holdnack, J. A. (2013). Understanding and using multivariate base rates with the WAIS-IV/WMS-IV. In Holdnack J. A., Drozdick L. W., Weiss L. G., & Iverson G. L. (Eds.), WAIS-IV/WMS-IV/ACS: Advanced clinical interpretation. San Diego, CA: Elsevier Science.
Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity. NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426. doi:10.1016/j.acn.2005.02.002.
Carroll, L. J., Cassidy, J. D., Holm, L., Kraus, J., & Coronado, V. G. (2004). Methodological issues and research recommendations for mild traumatic brain injury: The WHO collaborating centre task force on mild traumatic brain injury. Journal of Rehabilitation Medicine, 36, 113–125. doi:http://dx.doi.org/10.1016/j.apmr.2013.04.026.
Carter, K. R., Scott, J. G., Adams, R. L., & Linck, J. (2016). Base rate comparison of suboptimal scores on the RBANS effort scale and effort index in Parkinson's disease. The Clinical Neuropsychologist, 30, 1118–1125. doi:10.1080/13854046.2016.1206145.
Cicerone, K. D., & Karlmar, K. (1995). Persistent postconcussion syndrome: The structure of subjective complaints after mild traumatic brain injury. Journal of Head Trauma Rehabilitation, 10(3), 1–17. doi:10.1097/00001199-199506000-00002.
Clark, A. L., Sorg, S. F., Schiehser, D. M., Bigler, E. D., Bondi, M. W., Jacobson, M. W., et al. (2016). White matter associations with performance validity testing in veterans with mild traumatic brain injury: The utility of biomarkers in complicated assessment. Journal of Head Trauma Rehabilitation, 31, 346–359. doi:10.1097/HTR.0000000000000183.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Constantinou, M., Bauer, L., Ashendorf, L., Fisher, J. M., & McCaffrey, R. J. (2005). Is poor performance on recognition memory effort measures indicative of generalized poor performance on neuropsychological tests? Archives of Clinical Neuropsychology, 20, 191–198. doi:10.1016/j.acn.2004.06.002.
Crighton, A. H., Wygant, D. B., Holt, K. R., & Granacher, R. P. (2015). Embedded effort scales in the repeatable battery for the assessment of neuropsychological status: Do they detect neurocognitive malingering? Archives of Clinical Neuropsychology, 30, 181–185. doi:10.1093/arclin/acv002.
Denning, J. H. (2012). The efficiency and accuracy of the test of memory malingering trial 1, errors on the first 10 items of the test of memory malingering, and five embedded measures in predicting invalid test performance. Archives of Clinical Neuropsychology, 27, 417–432. doi:10.1093/arclin/acs044.
Drag, L. L., Spencer, R. J., Walker, S. J., Pangilinan, P. H., & Bieliauskas, L. A. (2012). The contributions of self-reported injury characteristics and psychiatric symptoms to cognitive functioning in OEF/OIF veterans with mild traumatic brain injury. Journal of the International Neuropsychological Society, 18, 576–584. doi:10.1017/S1355617712000203.
Dunham, K. J., Shadi, S., Sofko, C. A., Denney, R. L., & Calloway, J. (2014). Comparison of the repeatable battery for the assessment of neuropsychological status effort scale and effort index in a dementia sample. Archives of Clinical Neuropsychology, 29, 633–641. doi:10.1093/arclin/acu042.
Green, P. (2007). The pervasive influence of effort on neuropsychological tests. Physical Medicine and Rehabilitation Clinics of North America, 18, 43–68. doi:10.1016/j.pmr.2006.11.002.
Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M., 3rd (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Injury, 15, 1045–1060. doi:10.1080/02699050110088254.
Greve, K. W., & Bianchini, K. J. (2004). Setting empirical cut-offs on psychometric indicators of negative response bias: A methodological commentary with recommendations. Archives of Clinical Neuropsychology, 19, 533–541. doi:10.1016/j.acn.2003.08.002.
Greve, K. W., & Bianchini, K. J. (2007). Detection of cognitive malingering with tests of executive function. In Larrabee G. J. (Ed.), Assessment of malingered neuropsychological deficits (pp. 171–225). New York: Oxford University Press.
Greve, K. W., Bianchini, K. J., & Doane, B. M. (2006). Classification accuracy of the test of memory malingering in traumatic brain injury: Results of a known-groups analysis. Journal of Clinical and Experimental Neuropsychology, 28, 1176–1190. doi:10.1080/13803390500263550.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R. (2009). American academy of clinical neuropsychology consensus conference statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129. doi:10.1080/
Hoge, C. W., McGurk, D., Thomas, J. L., Cox, A. L., Engel, C. C., & Castro, C. A. (2008). Mild traumatic brain injury in U.S. Soldiers returning from Iraq. New England Journal of Medicine, 358, 453–463. doi:10.1056/NEJMoa072972.
Hook, J. N., Marquine, M. J., & Hoelzle, J. B. (2009). Repeatable battery for the assessment of neuropsychological status effort index performance in a medically ill geriatric sample. Archives of Clinical Neuropsychology, 24, 231–235. doi:10.1093/arclin/acp026.
Iverson, G. L. (2006). Ethical issues associated with the assessment of exaggeration, poor effort, and malingering. Applied Neuropsychology, 13, 77–90. doi:10.1207/s15324826an1302_3.
Iverson, G. L., Gardner, A. J., Terry, D. P., Ponsford, J. L., Sills, A. K., Broshek, D. K., et al. (2017). Predictors of clinical recovery from concussion: A systematic review. British Journal of Sports Medicine, 51, 941–948. doi:10.1136/bjsports-2017-097729.
Iverson, G. L., Lange, R. T., Brooks, B. L., & Rennison, V. L. (2010). "Good old days" bias following mild traumatic brain injury. The Clinical Neuropsychologist, 24, 17–37. doi:10.1080/13854040903190797.
Iverson, G. L., Zasler, N. D., & Lange, R. T. (2006). Post-concussive disorder. In Zasler N. D., Katz D. I., & Zafonte R. D. (Eds.), Brain injury medicine: Principles and practice (pp. 373–405). New York: Demos Medical Publishing.
Jones, A. (2013). Test of memory malingering: Cutoff scores for psychometrically defined malingering groups in a military sample. The Clinical Neuropsychologist, 27, 1043–1059. doi:10.1080/13854046.2013.804949.
Jones, A. (2016). Repeatable battery for the assessment of neuropsychological status: Effort index cutoff scores for psychometrically defined malingering groups in a military sample. Archives of Clinical Neuropsychology, 31, 273–283. doi:10.1093/arclin/acw006.
Kirkwood, M. W., Peterson, R. L., Connery, A. K., Baker, D. A., & Grubenhoff, J. A. (2014). Postconcussive symptom exaggeration after pediatric mild traumatic brain injury. Pediatrics, 133, 643–650. doi:10.1542/peds.2013-3195.
Lagarde, E., Salmi, L. R., Holm, L. W., Contrand, B., Masson, F., Ribereau-Gayon, R., et al. (2014). Association of symptoms following mild traumatic brain injury with posttraumatic stress disorder vs. postconcussion syndrome. JAMA Psychiatry, 71, 1032–1040. doi:10.1001/jamapsychiatry.2014.666.
Lange, R. T., Iverson, G. L., Brooks, B. L., & Rennison, V. L. (2010). Influence of poor effort on self-reported symptoms and neurocognitive test performance following mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 32, 961–972. doi:10.1080/13803391003645657.
Lange, R. T., Pancholi, S., Bhagwat, A., Anderson-Barnes, V., & French, L. (2012). Influence of poor effort on neuropsychological test performance in military personnel following mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 34, 453–466. doi:10.1080/13803395.2011.648175.
Larrabee, G. J. (2012). Performance validity and symptom validity in neuropsychological assessment. Journal of the International Neuropsychological Society, 18, 625–630.
Lippa, S. M., Agbayani, K. A., Hawes, S., Jokic, E., & Caroselli, J. S. (2014). Effort in acute traumatic brain injury: Considering more than pass/fail. Rehabilitation Psychology, 59, 306–312. doi:10.1037/a0037217.
Lippa, S. M., Hawes, S., Jokic, E., & Caroselli, J. S. (2013). Sensitivity of the RBANS to acute traumatic brain injury and length of post-traumatic amnesia. Brain Injury, 27, 689–695. doi:10.3109/02699052.2013.771793.
Lippa, S. M., Lange, R. T., Bhagwat, A., & French, L. M. (2017). Clinical utility of embedded performance validity tests on the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) following mild traumatic brain injury. Applied Neuropsychology: Adult, 24, 73–80. doi:10.1080/23279095.2015.1100617.
Lippa, S. M., Pastorek, N. J., Romesser, J., Linck, J., Sim, A. H., Wisdom, N. M., et al. (2014). Ecological validity of performance validity testing. Archives of Clinical Neuropsychology, 29, 236–244. doi:10.1093/arclin/acu002.
MacGregor, A. J., Shaffer, R. A., Dougherty, A. L., Galarneau, M. R., Raman, R., Baker, D. G., et al. (2010). Prevalence and psychological correlates of traumatic brain injury in operation Iraqi freedom. The Journal of Head Trauma Rehabilitation, 25(1), 1–8. doi:10.1097/HTR.0b013e3181c2993d.
Management of Concussion/mTBI Working Group. (2009). VA/DoD clinical practice guideline for management of concussion/mild traumatic brain injury. Journal of Rehabilitation Research and Development, 46, CP1–68. http://www.rehab.research.va.gov/jour/09/46/6/pdf/cpg.pdf.
McCormick, C. L., Yoash-Gantz, R. E., McDonald, S. D., Campbell, T. C., & Tupler, L. A. (2013). Performance on the Green Word Memory Test following Operation Enduring Freedom/Operation Iraqi Freedom-era military service: Test failure is related to evaluation context. Archives of Clinical Neuropsychology, 28, 808–823. doi:10.1093/arclin/act050.
McCrea, M., Pliskin, N., Barth, J., Cox, D., Fink, J., French, L., et al. (2008). Official position of the military TBI task force on the role of neuropsychology and rehabilitation psychology in the evaluation, management, and research of military veterans with traumatic brain injury. The Clinical Neuropsychologist, 22, 10–26. doi:10.1080/13854040701760981.
McKay, C., Casey, J. E., Wertheimer, J., & Fichtenberg, N. L. (2007). Reliability and validity of the RBANS in a traumatic brain injured sample. Archives of Clinical Neuropsychology, 22, 91–98. doi:10.1016/j.acn.2006.11.003.
McKay, C., Wertheimer, J. C., Fichtenberg, N. L., & Casey, J. E. (2008). The repeatable battery for the assessment of neuropsychological status (RBANS): Clinical utility in a traumatic brain injury sample. The Clinical Neuropsychologist, 22, 228–241. doi:10.1080/13854040701260370.
Mild Traumatic Brain Injury Committee, American Congress of Rehabilitation Medicine, & Head Injury Interdisciplinary Special Interest Group. (1993). Definition of mild traumatic brain injury. Journal of Head Trauma Rehabilitation, 8, 86–87. https://www.acrm.org/wp-content/uploads/pdf/TBIDef_English_10-10.pdf.
Moore, R. C., Davine, T., Harmell, A. L., Cardenas, V., Palmer, B. W., & Mausbach, B. T. (2013). Using the repeatable battery for the assessment of neuropsychological status (RBANS) effort index to predict treatment group attendance in patients with schizophrenia. Journal of the International Neuropsychological Society, 19, 198–205. doi:10.1017/S1355617712001221.
Morra, L. F., Gold, J. M., Sullivan, S. K., & Strauss, G. P. (2015). Predictors of neuropsychological effort test performance in schizophrenia. Schizophrenia Research, 162, 205–210. doi:10.1016/j.schres.2014.12.033.
Mossman, D., Wygant, D. B., Gervais, R. O., & Hart, K. J. (2017). Trial 1 versus Trial 2 of the test of memory malingering: Evaluating accuracy without a "Gold Standard". Psychological Assessment. Advance online publication. doi:10.1037/pas0000449.
Nelson, N. W., Hoelzle, J. B., McGuire, K. A., Ferrier-Auerbach, A. G., Charlesworth, M. J., & Sponheim, S. R. (2010). Evaluation context impacts neuropsychological performance of OEF/OIF veterans with reported combat-related concussion. Archives of Clinical Neuropsychology, 25, 713–723. doi:10.1093/arclin/acq075.
Nicholson, K., Martelli, M. F., & Zasler, N. D. (2001). Does pain confound interpretation of neuropsychological test results? NeuroRehabilitation, 16, 225–230. http://www.ncbi.nlm.nih.gov/pubmed/11790908.
Pachet, A. K. (2007). Construct validity of the Repeatable Battery of Neuropsychological Status (RBANS) with acquired brain injury patients. The Clinical Neuropsychologist, 21, 286–293. doi:10.1080/13854040500376823.
Paulson, D., Horner, M. D., & Bachman, D. (2015). A comparison of four embedded validity indices for the RBANS in a memory disorders clinic. Archives of Clinical Neuropsychology, 30, 207–216. doi:10.1093/arclin/acv009.
Proto, D. A., Pastorek, N. J., Miller, B. I., Romesser, J. M., Sim, A. H., & Linck, J. F. (2014). The dangers of failing one or more performance validity tests in individuals claiming mild traumatic brain injury-related postconcussive symptoms. Archives of Clinical Neuropsychology, 29, 614–624. doi:10.1093/arclin/acu044.
Randolph, C., Tierney, M. C., Mohr, E., & Chase, T. N. (1998). The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS): Preliminary clinical validity. Journal of Clinical and Experimental Neuropsychology, 20, 310–319. doi:10.1076/jcen.20.3.310.823.
Rogers, R. (2008). Clinical assessment of malingering and deception (3rd ed.). New York: Guilford.
Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists' beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22, 213–223. doi:10.1016/j.acn.2006.12.004.
Sieck, B. C., Smith, M. M., Duff, K., Paulsen, J. S., & Beglinger, L. J. (2013). Symptom validity test performance in the Huntington Disease Clinic. Archives of Clinical Neuropsychology, 28, 135–143. doi:10.1093/arclin/acs109.
Silverberg, N. D., Wertheimer, J. C., & Fichtenberg, N. L. (2007). An effort index for the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). The Clinical Neuropsychologist, 21, 841–854. doi:10.1080/13854040600850958.
Solomon, G. S., Kuhn, A. W., & Zuckerman, S. L. (2016). Depression as a modifying factor in sport-related concussion: A critical review of the literature. The Physician and Sportsmedicine, 44, 14–19. doi:10.1080/00913847.2016.1121091.
Stenclik, J. H., Miele, A. S., Silk-Eglit, G., Lynch, J. K., & McCaffrey, R. J. (2013). Can the sensitivity and specificity of the TOMM be increased with differential cutoff scores? Applied Neuropsychology: Adult. Advance online publication. doi:10.1080/09084282.2012.704603.
Stevens, A., Friedel, E., Mehren, G., & Merten, T. (2008). Malingering and uncooperativeness in psychiatric and psychological assessment: Prevalence and effects in a German sample of claimants. Psychiatry Research, 157, 191–200. doi:10.1016/j.psychres.2007.01.003.
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary (3rd ed.). Oxford; New York: Oxford University Press.
Suhr, J. A., & Gunstad, J. (2005). Further exploration of the effect of "diagnosis threat" on cognitive performance in individuals with mild head injury. Journal of the International Neuropsychological Society, 11, 23–29. doi:10.1017/S1355617705050010.
Terrio, H., Brenner, L. A., Ivins, B. J., Cho, J. M., Helmick, K., Schwab, K., et al. (2009). Traumatic brain injury screening: Preliminary findings in a US Army Brigade Combat Team. The Journal of Head Trauma Rehabilitation, 24, 14–23. doi:10.1097/HTR.0b013e31819581d8.
Tombaugh, T. (1996). TOMM: Test of memory malingering. North Tonawanda, NY: Multi-Health Systems Inc.
Tsanadis, J., Montoya, E., Hanks, R. A., Millis, S. R., Fichtenberg, N. L., & Axelrod, B. N. (2008). Brain injury severity, litigation status, and self-report of postconcussive symptoms. The Clinical Neuropsychologist, 22, 1080–1092. doi:10.1080/13854040701796928.
Vasterling, J. J., Brailey, K., Proctor, S. P., Kane, R., Heeren, T., & Franz, M. (2012). Neuropsychological outcomes of mild traumatic brain injury, post-traumatic stress disorder and depression in Iraq-deployed US Army soldiers. British Journal of Psychiatry, 201, 186–192. doi:10.1192/bjp.bp.111.096461.
Verfaellie, M., Lafleche, G., Spiro, A., & Bousquet, K. (2014). Neuropsychological outcomes in OEF/OIF veterans with self-report of blast exposure: Associations with mental health, but not MTBI. Neuropsychology, 28, 337–346. doi:10.1037/neu0000027.
Walters, G. D., Berry, D. T., Rogers, R., Payne, J. W., & Granacher, R. P., Jr. (2009). Feigned neurocognitive deficit: Taxon or dimension? Journal of Clinical and Experimental Neuropsychology, 31, 584–593. doi:10.1080/13803390802363728.
Walters, G. D., Rogers, R., Berry, D. T., Miller, H. A., Duncan, S. A., McCusker, P. J., et al. (2008). Malingering as a categorical or dimensional construct: The latent structure of feigned psychopathology as measured by the SIRS and MMPI-2. Psychological Assessment, 20, 238–247. doi:10.1037/1040-3590.20.3.238.
Whitney, K. A., Shepard, P. H., Williams, A. L., Davis, J. J., & Adams, K. M. (2009). The medical symptom validity test in the evaluation of operation Iraqi freedom/operation enduring freedom soldiers: A preliminary study. Archives of Clinical Neuropsychology, 24, 145–152. doi:10.1093/arclin/acp020.
Williams, J. M. (2011). The malingering factor. Archives of Clinical Neuropsychology, 26, 280–285. doi:10.1093/arclin/acr009.
World Health Organization. (1992). The ICD-10 classification of mental and behavioural disorders: Clinical descriptions and diagnostic guidelines. Geneva: World Health Organization.
Young, J. C., Baughman, B. C., & Roper, B. L. (2012). Validation of the repeatable battery for the assessment of neuropsychological status—effort index in a veteran sample. The Clinical Neuropsychologist, 26, 688–699. doi:10.1080/13854046.2012.679624.
Zimmer, A., Heyanka, D., & Proctor-Weber, Z. (2017). Concordance validity of PVTs in a sample of veterans referred for mild TBI. Applied Neuropsychology: Adult. Advance online publication, 1–10. doi:10.1080/23279095.2017.1319835.
