The Severe Impairment Profile: A Conceptual Shift

Abstract

Objective: The current study sought to evaluate and replicate the severe impairment profile (SIP) of the Word Memory Test (WMT) in patients referred for dementia evaluations.

Method: The sample consisted of 125 patients referred for a neuropsychological evaluation at a large Veterans Affairs Medical Center. Patients were assigned a Clinical Dementia Rating (CDR) by blind raters and were classified according to their performance on performance validity testing. Subsequent chart reviews were conducted to help determine more accurately whether severe memory impairment was likely due to an underlying dementing process versus poor effort/task engagement.

Results: In our sample, 51% of patients failed the easy WMT subtests, and 93% of these patients obtained the SIP. Failure rates on the easy subtests generally coincided with more severely impaired CDR ratings as well as more impaired delayed memory composite scores. Chart review indicated a likely substantial proportion of classification errors when using the SIP, with a positive posttest probability of impairment of 65% when the SIP was present versus 28% for a negative result.

Conclusions: Our findings suggest that the SIP does not function effectively in a mixed dementia sample where there is increased potential for secondary gain. Additional concern is expressed regarding the overall likelihood of obtaining the SIP and the inferential decisions that follow from obtaining it. Future research should examine more optimal cut scores or alternative methods for more accurately classifying patients across different clinical contexts and patterns of impairment.

Keywords: Dementia, Malingering/Symptom validity testing, Assessment, Elderly/geriatrics/aging, Learning and memory

Introduction

Performance validity tests (PVTs) have been proposed as necessary components of neuropsychological evaluations by both the American Academy of Clinical Neuropsychology (Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009) and the National Academy of Neuropsychology (Bush et al., 2005). Strong advocacy for PVT inclusion by each member association is predicated on an extensive body of literature demonstrating their specificity in differentiating between honest responders and dissimulators (Vickery, Berry, Inman, Harris, & Orey, 2001). Much of this research has been conducted among individuals with a history of mild traumatic brain injury in settings with identifiable secondary gain, including litigation for injury (e.g., Green, Flaro, & Courtney, 2009; Green, Lees-Haley, & Allen, 2002; Greve, Ord, Curtis, Bianchini, & Brennan, 2008) or compensable military service-related injuries (Armistead-Jehle, 2010). However, research in the past decade has increasingly focused on the classification accuracy of PVTs in non-TBI populations where genuine impairments in cognition are not uncommon (Dean, Victor, Boone, Philpott, & Hess, 2009; Kiewel, Wisdom, Bradshaw, Pastorek, & Strutt, 2012; Whearty, Allen, Lee, & Strauss, 2015). PVT performance in older adults diagnosed with mild cognitive impairment (MCI) or dementia has received particular attention because many PVTs approximate the format of a memory task. Using the standard cut scores published in the test manual, the Test of Memory Malingering (TOMM) has shown failure rates ranging from 27% to 76% in these populations (Teichner & Wagner, 2004; Tombaugh, 1996).
An unknown portion of these “failures” are likely false positives, as suggested by the dose–response relationship between failure rates and dementia severity as indexed by mental status scores and diagnostic categorizations from objective testing instruments (Clark et al., 2012). A recent review compellingly demonstrates that many commonly used standalone and embedded PVTs have unacceptable rates of falsely identifying poor effort in individuals diagnosed with dementia (Dean et al., 2009). As a result of this decreased specificity, Dean et al. suggested alternative methods for evaluating effort in populations with dementia, such as adjusting cut scores to minimize Type I error. Borne from this need, Green (2004) developed a dementia profile [often referenced as the Genuine Memory Impairment Profile (GMIP) or the Severe Impairment Profile (SIP)] on the Medical Symptom Validity Test (MSVT) and the Nonverbal Medical Symptom Validity Test (NV-MSVT). He suggested that, when the measure was failed according to published cutoffs, a difference of at least 20 points between the average of the easy and hard subtests, together with certain recall indices not falling below free recall of the test stimuli, adequately identified test takers with legitimate memory impairment. Howe and Loring (2009) independently examined this dementia profile (hereafter referred to as the SIP) in a clinical sample and found a false positive classification rate of 5.8%. The patients who did not conform to the SIP in that sample failed to do so because their free recall scores were equal to or greater than their recognition recall indices. Singhal, Green, Ashaye, and Gill (2009) also found the SIP on the MSVT to have high diagnostic accuracy, as it correctly classified everyone in their sample of patients institutionalized with severe dementia. More recently, Green, Montijo, and Brockhaus (2011) examined the SIP in a sample of Spanish-speaking patients referred to a memory disorders clinic for dementia evaluation. Classification accuracy of both the MSVT and the Word Memory Test (WMT; Green, 2003) was examined using an easy–hard subtest difference score of at least 30 points (based on previously collected data; Green, 2003). Dementia severity was primarily classified using the Clinical Dementia Rating (CDR) scale (Morris et al., 1997; Morris, 1993) into unimpaired, possible mild cognitive impairment, and dementia groups, although it should be noted that neuropsychological test data were used to reclassify some patients after completion of the CDR rating. In this sample, all patients in the unimpaired group (N = 19) passed the easy WMT subtests, which was noted to be similar to a healthy adult sample collected as part of the WMT normative data (Green, 2003). In contrast, the possible MCI and dementia groups failed the easy WMT subtests at rates of 21.6% and 63%, respectively. When the profiles of those patients failing the easy subtests were examined, every case exhibited at least a 30-point difference between the average of the easy (IR, DR, and C) and hard subtests (MC, PAR, and FR), indicative of the theoretical SIP. The authors concluded that use of the SIP minimizes or altogether eliminates false positive errors. Rienstra, Twennaar, and Schmand (2013) similarly demonstrated that individuals with the SIP on the WMT tended to decline at greater rates on objective testing at 2-year follow-up than those deemed to have non-credible performances.
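To make the profile rule concrete, the following minimal sketch (in Python) expresses the easy–hard difference criterion as applied to the WMT in the studies above. The pass/fail cutoff shown and the omission of the additional free-recall condition are simplifying assumptions for illustration, not the exact algorithm from Green's manuals.

```python
# Hypothetical sketch of the WMT easy-hard difference rule described above.
# Index names follow the text (IR, DR, C = easy; MC, PAR, FR = hard); the
# cutoff values are illustrative assumptions, not the published criteria.

def wmt_sip(scores, easy_cut=82.5, diff_cut=30.0):
    """Return (failed_easy, sip) for a dict of WMT subtest percent-correct scores."""
    easy = [scores['IR'], scores['DR'], scores['C']]
    hard = [scores['MC'], scores['PAR'], scores['FR']]
    failed_easy = any(s < easy_cut for s in easy)        # below cutoff on any easy subtest
    easy_hard_diff = sum(easy) / 3 - sum(hard) / 3       # mean easy minus mean hard
    sip = failed_easy and easy_hard_diff >= diff_cut     # profile only examined after a failure
    return failed_easy, sip

# Example: a failed easy subtest with a large easy-hard split flags the SIP.
print(wmt_sip({'IR': 70, 'DR': 75, 'C': 68, 'MC': 30, 'PAR': 28, 'FR': 12}))
```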
While we agree with the general methodology utilized by Green and colleagues (2011), concerns remain regarding widespread application of the WMT SIP. For example, the sample from Green et al. was drawn from a general memory disorders clinic where there did not appear to be overt concern regarding external incentives, and the delayed recall interval was also reduced in an attempt to limit time demands. Both of these issues may limit generalizability to other settings. In contrast, Axelrod and Schutte (2010) demonstrated that the SIP of tests such as the MSVT did not adequately discriminate “poor effort” from genuine dementia when the profile was used in conjunction with objective neuropsychological testing in a patient population with access to compensation incentives. Although the influence of compensation incentives was not a primary focus of their study, such pressures do exist within the VA system (e.g., see Bush & Morgan, 2012; Young, Kearns, & Roper, 2011). Additionally, Green et al. utilized the CDR, a system which relies on self-report and clinical information collected at a single time point, as the primary means of determining dementia severity, rather than monitoring patients' trajectories over several years to identify possible conversion from MCI to dementia. As demonstrated by Rienstra and colleagues (2013), longitudinal tracking of patients and outcomes (i.e., stability, conversion, and reversion) can better establish the predictive validity of the SIP. Lastly, the underlying rationale behind the SIP has been questioned as circular, as it requires a determination about the possibility of dementia to be made prior to administering the test (Axelrod & Schutte, 2010). Using a similar methodology, the current study sought to replicate and extend the Green and colleagues (2011) WMT SIP findings among veterans referred for a dementia evaluation at a large Veterans Affairs (VA) hospital, where secondary gain can be pursued in the form of service-connected disability benefits (Bush & Morgan, 2012; Young et al., 2011). In addition to WMT performance, this study also tracked patient clinical course to better establish the predictive validity of the WMT SIP. Although the overarching goal of the current study was to further evaluate the diagnostic accuracy of the WMT SIP using several sources of information to better classify credible versus possibly non-credible performance at the time of testing, a secondary goal was to further consider the underlying logic and diagnostic accuracy of the SIP. That is, while the WMT provides information regarding “poor effort” versus “no evidence for poor effort,” the WMT SIP is conceptually different and implies an inferential dichotomy of “poor effort” versus “cognitive impairment.” This study sought to further examine the accuracy associated with using the WMT SIP as an indicator of cognitive impairment.

Participants and Procedures

The sample was drawn from a retrospective review of patients referred for an outpatient neuropsychological evaluation through the Neurology department at a large Veterans Affairs Medical Center for evaluation of cognitive functioning for suspected dementia between 2010 and 2013. Patients were excluded from the study if they were under 55 years of age, did not receive the WMT as part of their comprehensive cognitive evaluation, or met diagnostic criteria for a confusional state/delirium, resulting in a final sample of 125 patients.
See Table 1 for demographic information for the overall sample.

Table 1. Demographics and CDR ratings of the sample groups

Variable | Unimpaired (CDR 0 and 0.5, N = 45) | Impaired 0.5 (N = 45) | CDR 1 and 2 (N = 35) | Overall (N = 125)
Age | 66.43 (6.27) | 68.67 (6.77) | 71.74 (8.86)a | 68.69 (7.47)
Years of education | 13.49 (2.91) | 12.65 (2.60) | 12.20 (2.45) | 12.84 (2.71)
CDR rating | 0.48 (0.16) | 0.50 (0.00) | 1.31 (0.47)b | 0.71 (0.46)
CDR SOB | 1.36 (1.11) | 2.02 (1.43) | 7.47 (2.12)b | 3.30 (3.05)

Note: Values are mean (SD). a Significantly different from the unimpaired group. b Significantly different from the unimpaired and impaired 0.5 groups.

All 125 cases were retrospectively classified using the Clinical Dementia Rating Scale (CDR) by a neuropsychologist with formal training in CDR rating (BM) and a board-certified neurologist (VP), who arrived at consensus ratings. The CDR reviewers were provided with a de-identified folder for each patient containing relevant progress notes leading up to, and including, the initial neurology evaluation, as well as the MMSE administered during the subsequent neuropsychological evaluation. The CDR reviewers did not review any patient neuropsychological test data or any medical data following the initial neurology evaluation. The reviewing neuropsychologist was not involved in the neuropsychological assessments, whereas the reviewing neurologist was involved in the work-up of approximately 30% of the cases (but remained blinded to patient folders). After the initial clinical review by the CDR raters, five patients obtained a CDR of 0, 85 obtained a CDR of 0.5, 24 obtained a CDR of 1, and 11 obtained a CDR of 2. Following initial CDR assessment, and in a manner similar to Green and colleagues (2011), CDR rating groups were further characterized according to patient interview and neuropsychological data. Patients were placed in an “Unimpaired” group (n = 45) if they had a CDR of 0 or 0.5 in addition to objective memory performance that fell within normal limits, defined as demographically adjusted delayed free recall scores within 1.5 SD of the population average. Patients were placed in a “Possible MCI” group (n = 45) if they had a CDR rating of 0.5 and delayed memory performance that fell more than 1.5 SD below the demographically adjusted population average. Two additional dementia groupings, CDR = 1 (n = 24) and CDR = 2 (n = 11), were included to allow examination of dose–response relationships between PVT and neuropsychological test performance with increasing dementia severity. See Table 2 for a breakdown of this information.
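As a rough illustration of the grouping rule just described, the sketch below assigns a case to a study group from a CDR global score and a demographically adjusted delayed-memory z score. The function name and the handling of borderline values are assumptions for illustration, and the actual assignments also drew on clinical interview data.

```python
def study_group(cdr, delayed_memory_z, impair_cut=-1.5):
    """Assign a study group from a CDR global score and a delayed memory z score.

    cdr: Clinical Dementia Rating global score (0, 0.5, 1, or 2).
    delayed_memory_z: demographically adjusted delayed free-recall z score.
    impair_cut: z score treated as objective memory impairment (1.5 SD below the mean).
    """
    if cdr >= 1:
        return "Dementia (CDR 1 or 2)"
    if cdr == 0.5 and delayed_memory_z < impair_cut:
        return "Possible MCI (impaired 0.5)"
    return "Unimpaired (CDR 0 or 0.5, memory within normal limits)"

print(study_group(0.5, -2.1))   # Possible MCI (impaired 0.5)
print(study_group(0.5, -0.4))   # Unimpaired (CDR 0 or 0.5, memory within normal limits)
```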
Table 2. WMT index scores for all subjects by CDR rating

Index | CDR 0 (N = 5) | CDR 0.5 (N = 85) | Unimpaired (CDR 0 + 0.5, N = 45) | Impaired 0.5 (N = 45) | CDR 1 (N = 24) | CDR 2 (N = 11)
WMT IR | 97.00 (9.58) | 85.88 (14.24) | 92.50 (9.58)a | 80.50 (15.35) | 73.02 (12.09) | 74.32 (19.14)
WMT DR | 98.50 (2.24) | 88.04 (13.26) | 93.00 (9.10)a | 84.23 (15.02)b | 71.04 (16.27) | 71.59 (20.84)
WMT C | 96.50 (2.85) | 82.91 (14.13) | 88.72 (12.15)a | 78.61 (14.21)b | 67.60 (11.94) | 70.91 (15.70)
WMT MC | 89.00 (14.32) | 58.51 (24.70) | 73.22 (20.40)a | 49.63 (22.70)b | 26.04 (18.77) | 32.95 (21.12)
WMT PAR | 87.00 (9.75) | 56.78 (22.68) | 69.22 (20.17)a | 47.27 (20.91)b | 28.30 (17.68) | 32.73 (22.29)
WMT FR | 39.00 (13.99) | 26.66 (12.85) | 33.67 (11.69)a | 20.76 (11.29)b | 11.07 (10.71) | 12.00 (10.92)
WMT Pass, n (%) | 5 (100%) | 50 (58.8%) | 37 (82.2%) | 18 (40.0%) | 3 (12.5%) | 3 (27.3%)
CDR SOB | 0 | 1.78 (1.28) | 1.42 (1.10) | 1.93 (1.46) | 6.40 (1.55) | 9.82 (0.98)
Delayed Memory Z | −0.27 (1.36) | −1.06 (1.11) | −0.17 (0.55) | −1.92 (0.79) | −2.43 (0.83) | −2.11 (1.28)

Note: Values are mean (SD) unless otherwise indicated. a Significantly different from the impaired 0.5, CDR 1, and CDR 2 groups. b Significantly different from the unimpaired and CDR 1 groups.

The computerized WMT was administered to all participants in English. Administration followed manual guidelines, with the administrator leaving the room during the presentation and testing portions of the WMT, although the examiner checked with the patient several times to ensure the patient understood and was following the task directions. Whereas Green and colleagues (2011) halved the delayed recall interval to 15 min for participants, the current study used the manual-specified 30-min delay between immediate and delayed recall. In addition to the WMT, each patient completed a clinical interview and comprehensive neuropsychological testing as part of the clinical visit. Most patients completed a standard neuropsychological battery that included the California Verbal Learning Test, second edition (CVLT-II; Delis, Kramer, Kaplan, & Ober, 2000) and the Logical Memory subtests from the Wechsler Memory Scale, fourth edition (WMS-IV; Wechsler, 2009). Based on the clinical interview, a small portion of patients deemed to be significantly impaired, or unable to tolerate extensive testing, completed a shorter battery that included memory measures from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS; Randolph, 1998) or the Hopkins Verbal Learning Test, Revised (HVLT-R; Brandt & Benedict, 2001).
A comprehensive chart review was completed to aid follow-up analyses examining possible suboptimal performance on testing. This chart review was conducted by licensed neuropsychologists (RC and BM) as well as a second-year fellow in clinical neuropsychology (JR), and it involved examining each patient's medical chart to determine whether a PVT failure was more likely attributable to severe memory impairment or to suboptimal effort/poor task engagement. Although intentionality can rarely be known in an evaluative context, we attempted to discern factors more suggestive of severe memory impairment versus factors suggestive of poor effort or poor task engagement during testing. Factors specifically examined included the patient's neuropsychological data and report conclusions at the time of WMT administration, neurology notes from subsequent visits to determine progression of cognitive deficits suggestive of a dementing process, available neuroimaging findings, and other medical chart notes indicating the patient's observed and reported functional ability. This information had no bearing on initial classifications but was instead used to assist in the subsequent determination of WMT classification accuracy.

Results

Demographics and CDR Classifications

Demographic data and CDR scores presented in Table 1 are aggregated into three groups: Unimpaired (CDR 0s and 0.5s showing normal memory functioning, N = 47); Possible MCI (CDR of 0.5 with memory impairment on objective testing, N = 43); and Dementia (CDR of 1 or 2, N = 35). One-way ANOVAs were used to test for group differences in the reported demographic variables, with post hoc comparisons to determine which groups differed from each other. The variables of age, CDR score, and CDR Sum of Boxes violated the assumption of homogeneity of variance; therefore, Dunnett's C (Dunnett, 1980) was used for post hoc comparisons of those indices between the groups, and a Bonferroni correction for multiple comparisons was used otherwise. The Unimpaired group was significantly younger than the Dementia group (p < .05). There were no significant differences between the groups in years of education. For the CDR total score, both the Unimpaired and Possible MCI groups differed significantly from the Dementia group (p < .05) but not from each other. For the CDR Sum of Boxes variable, the same pattern was observed, with the Unimpaired and Possible MCI groups differing from the Dementia group (p < .05) but not from each other.

WMT Scores by CDR Classifications

Table 2 displays the mean scores for all WMT subtests grouped by CDR diagnostic category. Within this table, the “Unimpaired” group consists of those individuals rated as CDR 0 or 0.5 whose objective memory functioning was deemed within normal limits (within 1.5 SD on delayed memory indices). The “Impaired 0.5” group consists of those individuals rated as 0.5 by our CDR raters whose objective memory was deemed impaired (more than 1.5 SD below the mean). Although not depicted in the table, in the overall sample 61/125 (48.8%) patients passed the WMT based on performance on the easy subtests. Additionally, there was a stepwise progression in failure of the WMT easy subtests across groups, with 82.2% of the Unimpaired group, 40% of the MCI group, and only 17.1% of the Dementia group passing using manualized cutoffs.
We also see a fairly consistent worsening of performance on objective delayed memory measures as CDR ratings and WMT failures increase. One-way ANOVAs with the WMT indices (IR, DR, C, MC, PAR, and FR) as dependent variables across the classification groups (Unimpaired CDR 0 + 0.5s, Impaired CDR 0.5s, CDR 1s, and CDR 2s), with appropriate post hoc comparisons, were conducted to test for group differences in the WMT indices. Significant differences between the groups shown in Table 2 were found (F = 15.34, p < .001 for IR; F = 16.51, p < .001 for DR; F = 16.72, p < .001 for C; F = 33.97, p < .001 for MC; F = 27.75, p < .001 for PAR; and F = 25.46, p < .001 for FR; df = 3, 124). Homogeneity of variance was violated for the IR and DR indices; therefore, Dunnett's C was used for post hoc comparisons of those indices between the groups, and a Bonferroni correction for multiple comparisons was used otherwise. The Unimpaired group (Unimpaired 0 + 0.5s) performed significantly better than all other groups (Impaired 0.5s, CDR 1, and CDR 2) on all WMT indices (p < .05 for IR and DR; p < .001 for C, MC, PAR, and FR). The Impaired 0.5 group performed better than the CDR 1 group on DR (p < .05) and on MC, PAR, and FR (p < .02 for all indices).

WMT Performance by CDR Classification for Veterans Passing or Failing the WMT Easy Subtests

Tables 3 and 4 display our participant groups based on whether or not they failed IR, DR, or C on the WMT. In the group of WMT passers (Table 3), there were significant differences on all but the DR and C indices (F = 2.89, p = .043 for IR; F = 4.83, p = .005 for MC; F = 6.37, p = .001 for PAR; and F = 5.02, p = .004 for FR; df = 3, 57). Post hoc comparisons using a Bonferroni correction revealed that the group differences on the IR index were no longer significant after correction. On the MC index, the Unimpaired group differed significantly from the CDR 1 group (p = .046). On the PAR index, the Unimpaired group differed significantly from the Impaired 0.5 and CDR 1 groups (p = .024 and p = .004, respectively). On the FR index, the only significant difference after correction for multiple comparisons was the Unimpaired group performing better than the CDR 1 group (p = .004).
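The comparison strategy used throughout this section (an omnibus one-way ANOVA, a homogeneity-of-variance check, and then corrected pairwise comparisons) can be sketched roughly as follows. The group vectors are made-up illustrative values, Welch t-tests with a Bonferroni correction stand in for the pairwise step, and Dunnett's C itself is not implemented here.

```python
from itertools import combinations
from scipy import stats

# Hypothetical WMT index scores for four groups (illustrative values only).
groups = {
    "Unimpaired": [95, 92, 97, 90, 94],
    "Impaired 0.5": [84, 80, 86, 78, 82],
    "CDR 1": [72, 68, 75, 70, 66],
    "CDR 2": [74, 70, 69, 73, 71],
}

f, p = stats.f_oneway(*groups.values())            # omnibus one-way ANOVA
lev_stat, lev_p = stats.levene(*groups.values())   # homogeneity-of-variance check
print(f"ANOVA F = {f:.2f}, p = {p:.4f}; Levene p = {lev_p:.4f}")

# If variances look unequal, the paper uses Dunnett's C; otherwise
# Bonferroni-corrected pairwise comparisons, sketched here with Welch t-tests.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p_raw = stats.ttest_ind(groups[a], groups[b], equal_var=False)
    print(f"{a} vs {b}: corrected p = {min(1.0, p_raw * len(pairs)):.4f}")
```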
Table 3. WMT index scores for WMT easy subtest passers by CDR rating

Index | CDR 0 (N = 5) | CDR 0.5 (N = 50) | Unimpaired (CDR 0 + 0.5, N = 37) | Impaired 0.5 (N = 18) | CDR 1 (N = 3) | CDR 2 (N = 3)
WMT IR | 97.00 (9.58) | 95.65 (4.54) | 96.01 (4.47) | 95.28 (4.36) | 90.00 (2.50) | 90.00 (9.01)
WMT DR | 98.50 (2.24) | 95.86 (3.83) | 96.15 (4.11) | 96.00 (3.08) | 95.00 (5.00) | 93.33 (7.64)
WMT C | 96.50 (2.85) | 92.80 (5.82) | 93.38 (5.78) | 92.64 (5.65) | 86.67 (3.82) | 91.67 (8.78)
WMT MC | 89.00 (14.32) | 75.10 (15.00) | 79.86 (15.07)a | 68.82 (16.54) | 53.33 (28.43) | 60.83 (10.10)
WMT PAR | 87.00 (9.75) | 69.80 (16.71) | 75.27 (15.98)b | 62.94 (16.11) | 40.00 (31.23) | 61.67 (10.41)
WMT FR | 39.00 (13.99) | 33.57 (10.80) | 35.95 (11.14)a | 30.00 (10.11) | 12.50 (4.33) | 28.75 (1.77)
CDR SOB | 0 | 1.49 (1.33) | 1.19 (0.96) | 1.69 (1.89) | 5.83 (0.58) | 9.67 (0.58)
Delayed Memory Z | −0.27 (1.36) | −0.57 (0.87) | −0.15 (0.52) | −1.46 (0.81) | −1.79 (0.79) | −0.77 (0.61)

Note: Values are mean (SD). a Significantly different from the CDR 1 group. b Significantly different from the Impaired 0.5 and CDR 1 groups.

Table 4. WMT index scores for WMT easy subtest failures by CDR rating

Index | CDR 0.5 (N = 35) | Unimpaired (CDR 0.5, N = 8) | CDR 0.5 (Impaired only, N = 27) | CDR 1 (N = 21) | CDR 2 (N = 8)
WMT IR | 71.93 (11.43) | 76.25 (10.35) | 70.65 (11.59) | 70.60 (10.87) | 68.44 (18.85)
WMT DR | 76.86 (13.95) | 78.44 (11.80) | 76.39 (14.70) | 67.62 (14.26) | 63.44 (18.02)
WMT C | 68.79 (9.77) | 67.19 (10.56) | 69.26 (9.68) | 64.88 (10.02) | 63.13 (8.74)
WMT MC | 35.29 (13.28) | 42.50 (11.65)a | 33.15 (13.17)b | 22.14 (14.02) | 22.50 (12.25)
WMT PAR | 38.01 (16.05) | 41.25 (12.46) | 37.02 (17.09) | 26.45 (15.17) | 21.88 (13.61)
WMT FR | 16.69 (8.18) | 23.13 (8.10)a | 14.71 (7.26) | 10.83 (11.50) | 7.81 (7.25)
CDR SOB | 2.19 (1.11) | 2.50 (1.11) | 2.09 (1.11) | 6.48 (1.63) | 9.88 (1.13)
Delayed Memory Z | −1.75 (1.06) | −0.24 (0.69) | −2.20 (0.65) | −2.52 (0.81) | −2.62 (1.08)

Note: Values are mean (SD). a Significantly different from the CDR 1 and CDR 2 groups.
b Significantly different from the CDR 1 group.

In the WMT failure group (Table 4), all but four subjects obtained the SIP. Note that four subjects in the WMT failure group were excluded from later analyses due to missing PAR, FR, or both index scores; the presence of the SIP therefore could not be determined for these individuals. In the ANOVA analyses, significant differences were seen for all but the IR and C indices (F = 2.85, p = .045 for DR; F = 6.21, p < .001 for MC; F = 3.77, p = .015 for PAR; and F = 4.94, p = .004 for FR; df = 3, 63). After applying a Bonferroni correction in post hoc analyses, none of the classification groups differed significantly on the DR and PAR indices. For the MC index, the Unimpaired group performed better than the CDR 1 and CDR 2 groups (p = .003 and p = .021, respectively), and the Impaired CDR 0.5 group also performed better than the CDR 1 group (p = .034). On the FR index, the Unimpaired group performed better than the CDR 1 and CDR 2 groups (p = .011 and p = .006, respectively).

Classification Statistics for the SIP Predicting Poor Effort

As outlined in the Participants and Procedures section, a chart review was conducted to further ascertain the accuracy of the WMT SIP in the subsequent classification of individuals who initially failed the easy subtests (IR, DR, and C). Table 5 shows the results of this classification. In this table, not having the SIP (i.e., WMT failure without the profile = positive test result) would predict poor effort, whereas having the SIP (= negative test result) would suggest that failure on the WMT was likely due to underlying memory impairment. Among the patients presumably correctly classified according to WMT performance, three patients did not have the SIP and had corroborating evidence from chart review suggesting poor effort (i.e., true positives for poor effort). In addition, 35 patients had the SIP and also had corroborating evidence for cognitive impairment (i.e., true negatives for poor effort). Among patients incorrectly classified, one patient did not have the SIP, and thus would be classified as showing poor effort on the basis of WMT performance, but had corroborating evidence for significant cognitive impairment (i.e., severe impairment with clear evidence of progressive decline per clinical chart review). Lastly, 21 patients had the SIP but had corroborating evidence for poor effort based on chart review (i.e., false negatives for poor effort). Overall, data from this sample yield a sensitivity and positive predictive value of 12.5% and 75%, respectively; the corresponding specificity and negative predictive value are 97% and 62.5%. The prevalence of poor effort in this sample was approximately 40% using PVT performance in conjunction with corroborating chart review information. Using this information, we calculated posttest probabilities, which represent the probability of detecting poor effort (vs. the SIP [implied cognitive dysfunction]) given a positive test result (no SIP) or a negative test result (SIP present). This resulted in a positive posttest probability of poor effort of 73% for a positive test result and 37% for a negative test result (see Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000).
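The diagnostic indices and posttest probabilities reported in this and the following section follow directly from the 2 × 2 counts in Table 5, and the sketch below reproduces that arithmetic. Note that, computed from the raw counts, the posttest probability after a positive test equals the PPV (75%) and after a negative test equals 1 − NPV (37.5%); the slightly different 73%/37% figures reported in the text likely reflect rounding at intermediate steps.

```python
def classification_stats(tp, fp, fn, tn):
    """Standard 2 x 2 diagnostic indices plus likelihood-ratio posttest probabilities."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    prevalence = (tp + fn) / (tp + fp + fn + tn)
    pretest_odds = prevalence / (1 - prevalence)
    lr_pos = sens / (1 - spec)                    # likelihood ratio for a positive test
    lr_neg = (1 - sens) / spec                    # likelihood ratio for a negative test
    post_pos = (pretest_odds * lr_pos) / (1 + pretest_odds * lr_pos)
    post_neg = (pretest_odds * lr_neg) / (1 + pretest_odds * lr_neg)
    return dict(sens=sens, spec=spec, ppv=ppv, npv=npv,
                prevalence=prevalence, post_pos=post_pos, post_neg=post_neg)

# Counts from Table 5 (absence of the SIP = positive test for poor effort).
print(classification_stats(tp=3, fp=1, fn=21, tn=35))
```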
Table 5. Accuracy of classification determined solely by the SIP

 | Evidence of poor effort | No evidence of poor effort |
No SIP pattern | TP = 3 | FP = 1 | PPV = 3/(3 + 1) = 75%
SIP pattern | FN = 21 | TN = 35 | NPV = 35/(35 + 21) = 62.5%
 | Sens. = 3/(3 + 21) = 12.5% | Spec. = 35/(35 + 1) = 97.22% |

Note: SIP = severe impairment profile; TP = true positive; FP = false positive; PPV = positive predictive value; FN = false negative; TN = true negative; NPV = negative predictive value; Sens. = sensitivity; Spec. = specificity.

Classification Statistics for the SIP Predicting Impaired Cognition

In addition to providing information about effort, use of the SIP also implies that a portion of the sample is likely cognitively impaired. That is, while not having the SIP is meant to describe individuals likely to produce unreliable test performance, having the SIP is meant to suggest that response inconsistencies are secondary to expected memory deficits consistent with a diagnosis of dementia or another condition capable of producing moderate to severe cognitive impairment. In this sense, a corollary table was created to examine SIP profiles in relation to evidence for cognitive impairment (i.e., the converse of not obtaining the SIP: if the SIP is obtained, then there is evidence for cognitive impairment). Table 6 shows the results of this classification. Among the patients correctly classified, 35 patients with the SIP also had corroborating evidence from chart review suggesting cognitive impairment (i.e., true positives for identifying genuine impairment). In addition, three patients did not have the SIP and had corroborating evidence that did not support genuine impairment (i.e., true negatives). Among patients incorrectly classified, 21 patients had the SIP but chart review did not support the presence of severe impairment (i.e., false positives for genuine impairment). Lastly, one patient did not have the SIP but corroborating evidence from chart review (severe cognitive impairment with clear progressive decline) demonstrated genuine cognitive impairment (i.e., a false negative). Overall, data from this sample yield a sensitivity and positive predictive value of 97% and 62.5%, respectively; the corresponding specificity and negative predictive value are 12.5% and 75%. Based on this sample, the cognitive impairment base rate was approximately 63%. Using this information, we calculated posttest probabilities, which represent the probability of detecting cognitive impairment (i.e., SIP [implied cognitive dysfunction] vs. poor effort) given a positive test result (SIP present) or a negative test result (no SIP). This resulted in a positive posttest probability of impairment of 65% for a positive test result and 28% for a negative test result.
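Because Table 6 simply treats the SIP, rather than its absence, as the positive test, its indices are those of Table 5 with the two classes exchanged: Table 5's false negatives become false positives, and sensitivity/specificity and PPV/NPV swap. A minimal arithmetic check is shown below, using the text's reported base rate of roughly 63%; values computed from the raw counts alone (prevalence 60%) would instead equal the PPV (62.5%) and 1 − NPV (25%).

```python
# Relabeled counts from Table 6: SIP = positive test for genuine impairment.
tp, fp, fn, tn = 35, 21, 1, 3
sens = tp / (tp + fn)          # 35/36 = 0.972
spec = tn / (tn + fp)          # 3/24  = 0.125
ppv = tp / (tp + fp)           # 0.625
npv = tn / (tn + fn)           # 0.75

# Posttest probabilities at the text's reported base rate of approximately 63%.
pretest_odds = 0.63 / 0.37
post_pos = pretest_odds * (sens / (1 - spec))
post_neg = pretest_odds * ((1 - sens) / spec)
print(round(post_pos / (1 + post_pos), 2),   # ~0.65 when the SIP is present
      round(post_neg / (1 + post_neg), 2))   # ~0.27, close to the reported 28%
```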
Table 6. Accuracy of classification determined by chart review

 | Evidence of cognitive impairment | No evidence of cognitive impairment |
SIP pattern | TP = 35 | FP = 21 | PPV = 35/(35 + 21) = 62.5%
No SIP pattern | FN = 1 | TN = 3 | NPV = 3/(1 + 3) = 75%
 | Sens. = 35/(35 + 1) = 97.22% | Spec. = 3/(3 + 21) = 12.5% |

Note: SIP = severe impairment profile; TP = true positive; FP = false positive; PPV = positive predictive value; FN = false negative; TN = true negative; NPV = negative predictive value; Sens. = sensitivity; Spec. = specificity.

Discussion

The current study sought to replicate findings from Green and colleagues (2011) by examining the SIP of the WMT in a sample of older US veterans referred for neuropsychological testing due to memory complaints raising concern for dementia. As hypothesized, an increasing level of impairment as indicated by CDR scores was associated with a higher likelihood of performing below the recommended cut scores on the easy WMT subtests. For patients who failed the easy subtest items, further characterization of the SIP was made through integration of additional chart and neuropsychological test data, with results similar to those of Axelrod and Schutte (2010). The current study demonstrated a high specificity level (0.97), similar to that of Green and colleagues (2011) (0.98); however, this was tempered by a lower positive predictive value (0.75) and poor sensitivity (0.12). The low sensitivity is important, as 93% of patients (56/60) demonstrated the SIP, yet the NPV associated with this classification was 0.62. A primary purpose of a PVT such as the WMT is to aid the clinician in distinguishing those who perform suboptimally from those who do not. This dichotomy regarding poor effort can reasonably be made on the basis of PVT performance. In contrast, inferences regarding optimal effort on a continuum, intentionality, or the presence or absence of neurological disease cannot reasonably be made from PVT performance alone and generally require additional integration of data by the neuropsychologist. The SIP methodology, as currently proposed by Green and colleagues (2011), appears to move away from this inferential dichotomy and subsequently creates the potential for providers to affirm support for a neurologic diagnosis based on WMT SIP performance. For example, among patients referred for neuropsychological evaluation in the context of possible neurologic disease (e.g., dementia, vascular disease, epilepsy), a passing WMT performance on the “easy” items results in a determination that there is no evidence for poor effort. In contrast, a failure on the WMT easy items might prompt the neuropsychologist to subsequently examine the SIP to further ascertain effort. In this case the question changes from evidence for “poor effort” versus “no evidence for poor effort” to evidence for “poor effort” (i.e., easy–hard subtest average difference <30 points) versus “affirmation of severe impairment” (i.e., the SIP as currently conceptualized). While it could be argued that the inference is merely support for “poor effort” versus “unknown,” the mere labeling of the profile as the “SIP” (or GMIP) is laden with meaning and suggests to providers an affirmation of disease state.
The SIP as an inferential dichotomy of “poor effort” versus “severe impairment” represents a conceptual shift from the original intent of the WMT (i.e., performance on the easy subtests), or of forced-choice PVTs in general, and appears to be only tenuously supported by the current study. It is noteworthy that the higher the CDR score, the higher the probability of impairment on neuropsychological testing and the greater the likelihood of WMT failure on the easy test items. However, of those individuals performing below the cutoff on the WMT with complete data, 93.3% (56/60) displayed the SIP: in practice, the SIP nearly always labels a failed WMT profile as consistent with neurological disease. When obtaining the SIP is viewed as a positive test for neurologic disease (i.e., as implied by the “poor effort” vs. “genuine impairment” dichotomy), the diagnostic accuracy of identifying “impairment” in the current sample is modest at best (PPV = 62% for identifying “severe impairment,” see Table 6). Establishing a WMT profile score that minimizes false positive errors would improve the measure's originally intended purpose as a PVT in a dementia sample; however, the SIP is observed too frequently and is often inconsistent with additional medical information. The practical utility of, and empirical support for, the SIP in the current sample is therefore questionable. Ascertaining variable effort in neuropsychological assessment remains a challenging process, and the WMT remains one of the most powerful tools available to providers. It has demonstrated consistently excellent psychometric properties in traumatic brain injury samples (e.g., Green, Rohling, Lees-Haley, & Allen, 2001; Iverson, Green, & Gervais, 1999; Proto et al., 2014). The use of the WMT, and of other PVTs, in other neurological samples remains fully warranted. However, additional studies regarding the utility of PVT performance among individuals at risk for degenerative disorders should be undertaken. As previously indicated, many PVTs approximate the format of a forced-choice memory task, and the WMT relies heavily on a recognition memory strategy for successful completion of the easy subtests. Although speculative, it remains possible that WMT test properties may be less reliable for patients with amnestic profiles (e.g., amnestic MCI to AD) than for other neurological populations in which mesial temporal memory processes are less directly impaired. For example, it is generally accepted that disparate neurologic substrates mediate dissociable aspects of memory (e.g., recollection versus familiarity; Aggleton & Brown, 2006; but also see Squire, Wixted, & Clark, 2008), and various neurologic disorders are postulated to produce such impairments (e.g., Koen & Yonelinas, 2014). It has also been suggested that intratemporal specialization exists, with tasks high on a “semantico-syntactic” continuum dependent on the anterior portion of the inferior temporal gyrus and, to a lesser extent, perirhinal activity (Saling, 2009). At this point we believe that additional research in at-risk populations is needed to better characterize PVT failure in older patients, which has been shown to be higher than in younger, non-litigating populations (e.g., Rienstra et al., 2013), and it is possible that population-specific cut scores may be appropriate. The current study has several strengths.
First, this study evaluated an older population at high risk for PVT failure both from genuine memory impairment and from possible secondary gain factors, leading to a reasonable expectation that some patients would display non-credible performance even when the SIP was present. Second, the current study was able to utilize medical records from both before and after the neuropsychological evaluations, which created the opportunity to determine whether impairment was likely due to a degenerative condition causing genuine memory impairment or was related to non-credible performance due to possible secondary gain. An additional strength is the use of patient evaluations conducted as part of routine clinical care, while screening out individuals in whom a degenerative condition would have been atypical (e.g., those under the age of 55). Last, this study was able to utilize multiple methods for judging impairment in the presence of possible dementia through the use of trained CDR raters, objective memory performance, and performance on the WMT. The current study also has several limitations. The data utilized did not include detailed diagnostic information, which would have allowed further exploration of PVT failures in relation to expected deficit patterns. Another limitation is that the current study utilized data collected as part of routine clinical care and did not use a standardized battery of tests. Such a standardized procedure would have allowed closer inspection of objective performance as it relates to failed effort indicators and the SIP. Additionally, although the use of corroborating evidence from chart review adds to the literature, it may be helpful for future research to more tightly define and operationalize the variables included in chart review to minimize the possible subjectivity of such reviews. In conclusion, the results of the current study suggest that the context of the evaluation is important in the utilization of PVTs and in the interpretation of the cutoffs used to distinguish genuine memory impairment from non-credible performance in populations at risk for dementia. Whereas, in theory, a SIP would be useful if it accurately identified those performing suboptimally due to inadequate effort, the current iteration of the SIP appears to identify almost all patients who fail the WMT easy subtests as genuinely impaired (even when other clinical evidence suggests that some of those patients likely do not have severe memory impairment), which limits the clinical utility of the profile. Broader concern is also expressed regarding the SIP, as it represents a conceptual shift wherein the SIP may be used to affirm a disease state.

Conflict of Interest

None declared.

References

Aggleton, A. P., & Brown, M. W. (2006). Interleaving brain systems for episodic and recognition memory. Trends in Cognitive Sciences, 10, 455–463.
Armistead-Jehle, P. (2010). Symptom validity test performance in US veterans referred for evaluation of mild TBI. Applied Neuropsychology, 17, 52–59.
Axelrod, B. N., & Schutte, C. (2010). Analysis of the dementia profile on the Medical Symptom Validity Test. The Clinical Neuropsychologist, 24, 873–881.
Bush, S. S., & Morgan, J. E. (2012). Improbable neuropsychological presentations: Assessment, diagnosis, and management. In S. S. Bush (Ed.), Neuropsychological practice with veterans (pp. 27–44). New York: Springer Publishing Company.
Brandt, J., & Benedict, R. (2001). Hopkins Verbal Learning Test – Revised (HVLT-R). Lutz, FL: PAR.
Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity: NAN Policy and Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426.
Clark, L. R., Stricker, N. H., Libon, D. J., Delano-Wood, L., Salmon, D. P., Delis, D. C., et al. (2012). Yes/No forced choice recognition memory in mild cognitive impairment and Alzheimer's disease: Patterns of impairment and associations with dementia severity. The Clinical Neuropsychologist, 26, 1201–1216.
Dean, A. C., Victor, T. L., Boone, K. B., Philpott, L. M., & Hess, R. A. (2009). Dementia and effort test performance. The Clinical Neuropsychologist, 23, 133–152.
Delis, D., Kramer, J. H., Kaplan, E., & Ober, B. (2000). The California Verbal Learning Test: Adult Version. San Antonio, TX: The Psychological Corporation.
Dunnett, C. W. (1980). Pairwise multiple comparisons in the unequal variance case. Journal of the American Statistical Association, 75, 796–800.
Green, P., Lees-Haley, P. R., & Allen, L. M. (2002). The Word Memory Test and the validity of neuropsychological test scores. Forensic Neuropsychology, 2, 97–124.
Green, P. (2003). Green's Word Memory Test for Microsoft Windows: User's manual. Edmonton, Canada: Green's Publishing.
Green, P. (2004). Manual for the Medical Symptom Validity Test. Edmonton, Canada: Green's Publishing.
Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M. (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Injury, 15, 1045–1060.
Green, P., Flaro, L., & Courtney, J. (2009). Examining false positives on the Word Memory Test in adults with mild traumatic brain injury. Brain Injury, 23, 741–750.
Green, P., Montijo, J., & Brockhaus, R. (2011). High specificity of the Word Memory Test and the Medical Symptom Validity Test in groups with severe verbal memory impairment. Applied Neuropsychology, 18, 86–94.
Greve, K. W., Ord, J., Curtis, K. L., Bianchini, K. J., & Brennan, A. (2008). Detecting malingering in traumatic brain injury and chronic pain: A comparison of three forced-choice symptom validity tests. The Clinical Neuropsychologist, 22, 896–918.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R., Conference Participants. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.
Howe, L., & Loring, D. (2009). Classification accuracy and predictive ability of the Medical Symptom Validity Test's dementia profile and genuine memory impairment profile. The Clinical Neuropsychologist, 23, 329–342.
Iverson, G., Green, P., & Gervais, R. (1999). Using the Word Memory Test to detect biased responding in head injury litigation. The Journal of Cognitive Rehabilitation, 17, 4–8.
Kiewel, N. A., Wisdom, N. M., Bradshaw, M. R., Pastorek, N. J., & Strutt, A. M. (2012). A retrospective review of Digit Span-related effort indicators in probable Alzheimer's disease patients. The Clinical Neuropsychologist, 26, 965–974.
Koen, J. D., & Yonelinas, A. P. (2014). The effects of healthy aging, amnestic mild cognitive impairment, and Alzheimer's disease on recollection and familiarity: A meta-analytic review. Neuropsychology Review, 24, 332–354.
Morris, J. C. (1993). The Clinical Dementia Rating (CDR): Current version and scoring rules. Neurology, 43, 2412–2414.
Morris, J. C., Ernesto, C., Schafer, K., Coats, M., Leon, S., Sano, M., et al. (1997). Clinical Dementia Rating training and reliability in multicenter studies: The Alzheimer's Disease Cooperative Study experience. Neurology, 48, 1508–1510.
Proto, D. A., Pastorek, N. J., Miller, B. I., Romesser, J. M., Sim, A. H., & Linck, J. F. (2014). The dangers of failing one or more performance validity tests in individuals claiming mild traumatic brain injury-related postconcussive symptoms. Archives of Clinical Neuropsychology, 29, 614–624.
Randolph, C. (1998). Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). San Antonio, TX: Harcourt, The Psychological Corporation.
Rienstra, A., Groot, P. F., Spaan, P. E., Majoie, C. B., Nederveen, A. J., Walstra, G. J., et al. (2013). Symptom validity testing in memory clinics: Hippocampal-memory associations and relevance for diagnosing mild cognitive impairment. Journal of Clinical and Experimental Neuropsychology, 35, 59–70.
Rienstra, A., Twennaar, M. K., & Schmand, B. (2013). Neuropsychological characterization of patients with the WMT dementia profile. Archives of Clinical Neuropsychology, 28, 463–475.
Sackett, D., Straus, S., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). St. Louis: Churchill Livingstone.
Saling, M. M. (2009). Verbal memory in mesial temporal lobe epilepsy: Beyond material specificity. Brain, 132, 570–582.
Singhal, A., Green, P., Ashaye, K., & Gill, D. (2009). High specificity of the Medical Symptom Validity Test in patients with very severe memory impairment. Archives of Clinical Neuropsychology, 24, 721–728.
Squire, L. R., Wixted, J. T., & Clark, R. E. (2008). Recognition memory and the medial temporal lobe: A new perspective. Nature Reviews Neuroscience, 8, 872–883.
Teichner, G., & Wagner, M. T. (2004). The Test of Memory Malingering (TOMM): Normative data from cognitively intact, cognitively impaired, and elderly patients with dementia. Archives of Clinical Neuropsychology, 19, 455–464.
Tombaugh, T. N. (1996). TOMM: Test of Memory Malingering manual. North Tonawanda, NY: Multi-Health Systems, Inc.
Vickery, C. D., Berry, D. T., Inman, T. H., Harris, M. J., & Orey, S. A. (2001). Detection of inadequate effort on neuropsychological testing: A meta-analytic review of selected procedures. Archives of Clinical Neuropsychology, 16, 45–73.
Wechsler, D. (2009). Wechsler Memory Scale – Fourth Edition (WMS-IV). San Antonio, TX: Pearson Assessment.
Whearty, K. M., Allen, D. N., Lee, B. G., & Strauss, G. P. (2015). The evaluation of insufficient cognitive effort in schizophrenia in light of low IQ scores. Journal of Psychiatric Research, 68, 397–404.
Young, J. C., Kearns, L. A., & Roper, B. L. (2011). Validation of the MMPI-2 Response Bias Scale and Henry-Heilbronner Index in a U.S. veteran population. Archives of Clinical Neuropsychology, 26, 194–204.

Published by Oxford University Press 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

Loading next page...
 
/lp/ou_press/the-severe-impairment-profile-a-conceptual-shift-cnIl9HkwyX
Publisher
Oxford University Press
Copyright
Published by Oxford University Press 2017.
ISSN
0887-6177
eISSN
1873-5843
D.O.I.
10.1093/arclin/acx069
Publisher site
See Article on Publisher Site

Abstract

Abstract Objective The current study sought to evaluate and replicate the severe impairment profile (SIP) of the Word Memory Test (WMT) in patients referred for dementia evaluations. Method The sample consisted of 125 patients referred for a neuropsychological evaluation at a large Veterans Affairs Medical Center. Patients were assigned a Clinical Dementia Rating (CDR) by blind raters, and were classified according to their performance on performance validity testing. Subsequent chart reviews were conducted to help in more accurately determining the presence of severe memory impairment likely due to an underlying dementing process versus poor effort/task engagement. Results In our sample, 51% of patients failed easy WMT subtests and 93% of these patients obtained the SIP. The rates of failure on these easy subtests generally coincided with both more severely impaired CDR ratings, as well as more impaired delayed memory composite scores. Upon chart review, it was determined that there were likely a significant portion of classification errors using the SIP, with a positive posttest probability of impairment based on having the SIP being 65% as opposed to 28% for a negative result. Conclusions Our findings suggest that the SIP does not appear to function effectively in a mixed dementia sample where there is increased potential for secondary gain. Additional concern is expressed regarding the overall likelihood of obtaining the SIP and subsequent inferential decisions related to obtaining an SIP. Future research should examine more optimal cut scores or alternative methods for more accurately classifying patients in different clinical contexts and patterns of impairment. Dementia, Malingering/Symptom Validity Testing, Assessment, Elderly/geriatrics/aging, Learning and memory Introduction Performance validity tests (PVTs) have been proposed as necessary components of neuropsychological evaluations both by the American Academy of Clinical Neuropsychology (Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009) and the National Academy of Neuropsychology (Bush et al., 2005). Strong advocacy for PVT inclusion by each member association is predicated on an extensive body of literature demonstrating their specificity in differentiating between honest responders and dissimulators (Vickery, Berry, Inman, Harris, & Orey, 2001). Much of this research has been conducted among individuals with a history of mild traumatic brain injury in settings with identifiable secondary gain, including litigation for injury (e.g., Green, Flaro, & Courtney, 2009; Green, Lees-Haley, & Allen, 2002; Greve, Ord, Curtis, Bianchini, & Brennan, 2008) or compensable military service-related injuries (Armistead-Jehle, 2010). However, research in the past decade has increasingly focused on the classification accuracy of PVTs in non-TBI populations where genuine impairments in cognition are not uncommon (Dean, Victor, Boone, Philpott, & Hess, 2009; Kiewel, Wisdom, Bradshaw, Pastorek, & Strutt, 2012; Whearty, Allen, Lee, & Strauss, 2015). PVT performance in older adults diagnosed with mild cognitive impairment (MCI) or dementia has received much attention as many PVTs approximate the format of a memory task. Using standard cut-scores published in the manual, the Test of Memory Malingering (TOMM) has shown failure rates ranging from 27% to 76% in these populations (Teichner & Wagner, 2004; Tombaugh, 1996). 
An unknown portion of these “failures” are likely false-positives as suggested by the dose–response relationship between failure rates and severity of dementia as indexed by mental status scores as well as diagnostic categorizations from objective testing instruments (Clark et al., 2012). A recent review paper compellingly demonstrates that many commonly used standalone and embedded PVTs have unacceptable rates of falsely identifying poor effort in individuals diagnosed with dementia (Dean et al., 2009). As a result of this decreased specificity, Dean et al. suggest alternative methods for evaluating effort in populations with dementia such as adjusting cut scores to minimize Type I error. Borne from this need, Green (2004) developed a dementia profile [often referenced as the Genuine Memory Impairment Profile (GMIP) or the Severe Impairment Profile (SIP)] on the Medical Symptom Validity Test (MSVT) and the Nonverbal Medical Symptom Validity Test (NV-MSVT). He suggested that a 20-point difference in performance between the average of easy and hard subtests when the measure was failed according to published cutoffs alongside certain recall indices not falling below free recall of test stimuli adequately identified those test takers with legitimate memory impairments. Howe and Loring (2009) independently examined this dementia profile (heretofore referred to as SIP) in a clinical sample and found a false positive classification rate of 5.8%. The patients who did not conform to the SIP in this sample failed to do so due to their free recall scores being equal or greater than recognition recall indices. Singhal, Green, Ashaye, and Gill (2009) also found SIP in the MSVT to have high diagnostic accuracy as it correctly classified everyone in their sample of patients institutionalized with severe dementia. More recently, Green, Montijo, and Brockhaus (2011) examined the SIP in a sample of Spanish speaking patients referred to a memory disorders clinic for dementia evaluation. Classification accuracy of both the MSVT and the Word Memory Test (WMT; Green, 2003) was examined using an easy-hard subtest difference score of at least 30 points (based on previously collected data [Green, 2003]). Dementia severity was primarily classified using the Clinical Dementia Rating (CDR) scale (Morris et al., 1997; Morris, 1993) into unimpaired, possible mild cognitive impairment, and dementia groups, although it should be noted that neuropsychological test data was used to reclassify some patients after completion of CDR rating. In this sample, all patients in the unimpaired group (N = 19) passed the easy WMT subtests, which was noted to be similar to a healthy adult sample collected as part of the WMT normative data (Green, 2003). In contrast, the possible MCI group and dementia groups failed the easy WMT subtests at rates of 21.6% and 63%, respectively. When the profiles of those patients failing the easy subtests were examined, it was found that every case exhibited at least a 30-point difference between the average of the easy (IR, DR, and C) and hard subtests (MC, PAR, and FR), indicative of the theoretical SIP. The authors concluded that use of the SIP minimizes or altogether eliminates false positive errors. Rienstra, Twennaar, and Schmand (2013) similarly demonstrated that individuals with the SIP on the WMT tended to decline at greater rates on objective testing at 2-year follow-up than those deemed to have non-credible performances. 
While we agree with the general methodology utilized by Green and colleagues (2011), concerns remain regarding the widespread application of the WMT SIP. For example, the sample from Green et al. was drawn from a general memory disorders clinic where there did not appear to be overt concern regarding external incentives, and the delayed recall interval was also reduced in an attempt to reduce time demands. Both of these issues may limit generalizability to other settings. In contrast, Axelrod and Schutte (2010) demonstrated that the SIP on tests such as the MSVT did not adequately discriminate "poor effort" from genuine dementia when the profile was used in conjunction with objective neuropsychological testing in a patient population with access to compensation incentives. Although the influence of compensation incentives was not a primary focus of their study, such pressures do exist within the VA system (e.g., see Bush & Morgan, 2012; Young, Kearns, & Roper, 2011). Additionally, Green et al. utilized the CDR, a system which relies on self-report and clinical information collected at a single time point, as the primary means of determining dementia severity, rather than monitoring the patients' trajectories over several years to identify possible conversion from MCI to dementia. As demonstrated by Rienstra and colleagues (2013), longitudinal tracking of patients and outcomes (i.e., stability, conversion, and reversion) can better establish the predictive validity of the SIP. Lastly, the underlying rationale behind the SIP has been questioned as circular, as it requires a determination to be made about the possibility of dementia prior to administering the test (Axelrod & Schutte, 2010).

Using a similar methodology, the current study sought to replicate and extend the findings of Green and colleagues (2011) regarding the WMT SIP among veterans referred for a dementia evaluation at a large Veterans Affairs (VA) hospital, where secondary gain can be pursued in the form of service-connected disability benefits (Bush & Morgan, 2012; Young et al., 2011). In addition to WMT performance, this study also tracked patients' clinical course to better establish the predictive validity of the WMT SIP. Although the overarching goal of the current study was to further evaluate the diagnostic accuracy of the WMT SIP using several sources of information to better classify credible versus possibly non-credible performance at the time of testing, a secondary goal was to further consider the underlying logic and diagnostic accuracy of the SIP. That is, while the WMT provides information regarding the presence of "poor effort" versus "no evidence for poor effort," the WMT SIP is conceptually different and implies an inferential dichotomy of "poor effort" versus "cognitive impairment." This study therefore sought to examine the accuracy associated with using the WMT SIP as an indicator of cognitive impairment.

Participants and Procedures

The sample was identified through a retrospective review of patients referred for an outpatient neuropsychological evaluation through the Neurology department at a large Veterans Affairs Medical Center for evaluation of cognitive functioning for suspected dementia from 2010 to 2013. Patients were excluded from the study if they were under 55 years of age, did not receive the WMT as part of their comprehensive cognitive evaluation, or if they met diagnostic criteria for a confusional state/delirium, resulting in a final sample of 125 patients.
See Table 1 for demographic information related to the overall sample.

Table 1. Demographics and CDRs of sample groups

| | Unimpaired 0s and 0.5s (N = 45) | Impaired 0.5s (N = 45) | CDR 1 and 2 (N = 35) | Overall total (N = 125) |
|---|---|---|---|---|
| Age | 66.43 (6.27) | 68.67 (6.77) | 71.74 (8.86)a | 68.69 (7.47) |
| Years of education | 13.49 (2.91) | 12.65 (2.60) | 12.20 (2.45) | 12.84 (2.71) |
| CDR rating | 0.48 (0.16) | 0.50 (0.00) | 1.31 (0.47)b | 0.71 (0.46) |
| CDR SOB | 1.36 (1.11) | 2.02 (1.43) | 7.47 (2.12)b | 3.30 (3.05) |

aSignificantly different than the unimpaired group. bSignificantly different than the unimpaired and impaired 0.5 groups.

All 125 cases were retrospectively classified using the Clinical Dementia Rating Scale (CDR) by a neuropsychologist with formal training in CDR rating (BM) and a board-certified neurologist (VP), who arrived at consensus ratings. The CDR reviewers were provided with a de-identified folder for each patient containing relevant progress notes leading up to, and including, the initial neurology evaluation, as well as the MMSE administered during the subsequent neuropsychological evaluation. The CDR reviewers did not review any patient neuropsychological test data or any medical data following the initial neurology evaluation. The neuropsychologist reviewing the de-identified patient folders was not involved in the neuropsychological assessments, whereas the reviewing neurologist was involved in the work-up of approximately 30% of the cases (but remained blinded to patient folders). After the initial clinical review by the CDR raters, five patients obtained a CDR of 0, 85 patients obtained a CDR of 0.5, 24 obtained a CDR of 1, and 11 obtained a CDR of 2.

Following initial CDR assessment, and in a manner similar to Green and colleagues (2011), CDR rating groups were further characterized according to patient interview and neuropsychological data. Patients were characterized into an "Unimpaired" group (n = 45) if they had a CDR of 0 or 0.5 in addition to objective memory performance within normal limits, defined as demographically adjusted delayed free recall scores falling within 1.5 SD of the population average. Patients were characterized into a "Possible MCI" group (n = 45) if they had a CDR rating of 0.5 and delayed memory performance that fell more than 1.5 SD below the demographically adjusted population average. Two additional dementia groupings, CDR = 1 and CDR = 2, were included to allow for examination of dose–response relationships between PVT and neuropsychological test performance with increasing dementia severity. These dementia groups were categorized based on CDR scores of 1 (n = 24) and 2 (n = 11). See Table 2 for a breakdown of this information.
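As an illustration of the grouping rule just described, the following sketch (not part of the original study; the function and variable names are invented for this example) assigns a patient to a study group from a CDR global score and a demographically adjusted delayed memory z-score:

```python
def classify_group(cdr_global, delayed_memory_z):
    """Illustrative sketch of the grouping rule described in the text.

    cdr_global: CDR global score (0, 0.5, 1, or 2)
    delayed_memory_z: demographically adjusted delayed free recall z-score
    """
    if cdr_global in (1.0, 2.0):
        # Dementia groups were defined directly by CDR score
        return "Dementia (CDR %g)" % cdr_global
    if cdr_global in (0.0, 0.5) and delayed_memory_z >= -1.5:
        # CDR 0 or 0.5 with delayed recall within 1.5 SD of the population mean
        return "Unimpaired"
    if cdr_global == 0.5 and delayed_memory_z < -1.5:
        # CDR 0.5 with delayed recall more than 1.5 SD below the mean
        return "Possible MCI"
    # Combinations not described in the text (e.g., CDR 0 with impaired memory)
    return "Unclassified"

print(classify_group(0.5, -1.8))  # -> "Possible MCI"
```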
Table 2. WMT index scores for all subjects by CDR rating

| | CDR 0 (N = 5) | CDR 0.5 (N = 85) | Unimpaired (CDR 0 + 0.5, N = 45) | Impaired 0.5 (N = 45) | CDR 1 (N = 24) | CDR 2 (N = 11) |
|---|---|---|---|---|---|---|
| WMT IR | 97.00 (9.58) | 85.88 (14.24) | 92.50 (9.58)a | 80.50 (15.35) | 73.02 (12.09) | 74.32 (19.14) |
| WMT DR | 98.50 (2.24) | 88.04 (13.26) | 93.00 (9.10)a | 84.23 (15.02)b | 71.04 (16.27) | 71.59 (20.84) |
| WMT C | 96.50 (2.85) | 82.91 (14.13) | 88.72 (12.15)a | 78.61 (14.21)b | 67.60 (11.94) | 70.91 (15.70) |
| WMT MC | 89.00 (14.32) | 58.51 (24.70) | 73.22 (20.40)a | 49.63 (22.70)b | 26.04 (18.77) | 32.95 (21.12) |
| WMT PAR | 87.00 (9.75) | 56.78 (22.68) | 69.22 (20.17)a | 47.27 (20.91)b | 28.30 (17.68) | 32.73 (22.29) |
| WMT FR | 39.00 (13.99) | 26.66 (12.85) | 33.67 (11.69)a | 20.76 (11.29)b | 11.07 (10.71) | 12.00 (10.92) |
| WMT Pass | 5 (100%) | 50 (58.8%) | 37 (82.2%) | 18 (40.0%) | 3 (12.5%) | 3 (27.3%) |
| CDR SOB | 0 | 1.78 (1.28) | 1.42 (1.10) | 1.93 (1.46) | 6.40 (1.55) | 9.82 (0.98) |
| Delayed Memory Z | −0.27 (1.36) | −1.06 (1.11) | −0.17 (0.55) | −1.92 (0.79) | −2.43 (0.83) | −2.11 (1.28) |

aSignificantly different than impaired 0.5, CDR 1, and CDR 2. bSignificantly different than unimpaired and CDR 1.

The computerized WMT was administered to all participants in English. Administration followed manual guidelines, with the administrator leaving the room during the presentation and testing portions of the WMT, although the examiner checked with the patient several times to ensure the patient understood and was following directions during the task. Whereas Green and colleagues (2011) halved the delayed recall interval to 15 min for participants, the current study used the manual-specified 30-min delay between immediate and delayed recall. In addition to the administration of the WMT, each patient was clinically interviewed and completed comprehensive neuropsychological testing as part of the clinical visit. Most patients completed a standard neuropsychological battery which included the California Verbal Learning Test – second edition (CVLT-II; Delis, Kramer, Kaplan, & Ober, 2000) and the Logical Memory subtests from the Wechsler Memory Scale – fourth edition (WMS-IV; Wechsler, 2009). A small portion of patients deemed during the clinical interview to be significantly impaired, or unable to tolerate extensive testing, completed a shorter battery which included memory measures from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS; Randolph, 1998) or the Hopkins Verbal Learning Test – Revised (HVLT-R; Brandt & Benedict, 2001).
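For readers unfamiliar with how the profile is computed, the sketch below illustrates the SIP decision rule described in the introduction (failure of any easy WMT subtest per the manual cutoff, plus an easy-hard mean difference of at least 30 points; Green et al., 2011). The cutoff is supplied as a parameter rather than reproduced from the manual, and the example values are hypothetical:

```python
def wmt_easy_fail_and_sip(ir, dr, c, mc, par, fr, easy_cutoff):
    """Return (failed_easy, has_sip) for one WMT profile (illustrative sketch).

    failed_easy: any easy subtest (IR, DR, C) below the manual cutoff,
                 which is supplied by the caller rather than hard-coded here.
    has_sip:     easy-subtest failure plus a mean easy-minus-hard difference
                 of at least 30 points (the criterion used by Green et al., 2011).
    """
    easy = (ir, dr, c)
    hard = (mc, par, fr)
    failed_easy = any(score < easy_cutoff for score in easy)
    easy_hard_diff = sum(easy) / 3.0 - sum(hard) / 3.0
    return failed_easy, failed_easy and easy_hard_diff >= 30.0

# Hypothetical profile and hypothetical cutoff value, for illustration only
print(wmt_easy_fail_and_sip(ir=70, dr=75, c=68, mc=30, par=32, fr=12, easy_cutoff=80.0))
```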
A comprehensive chart review was completed to aid in follow-up analyses examining possible suboptimal performance on testing. This chart review was conducted by licensed neuropsychologists (RC and BM) as well as a second-year fellow in clinical neuropsychology (JR). The review examined each patient's medical chart to determine whether a PVT failure was more likely attributable to severe memory impairment or to suboptimal effort/poor task engagement. Although intentionality can rarely be known in an evaluative context, we attempted to discern factors more suggestive of severe memory impairment versus factors more suggestive of poor effort or poor task engagement during testing. Factors specifically examined included the patient's neuropsychological data and report conclusions at the time of WMT administration, neurology notes from subsequent visits documenting progression of cognitive deficits suggestive of a dementing process, available neuroimaging findings, and other chart notes indicating the patient's observed and reported functional ability. This information had no bearing on initial classifications but was instead used to assist in the subsequent determination of WMT classification accuracy.

Results

Demographics and CDR Classifications

Demographic data and CDR scores presented in Table 1 are aggregated into three groups: Unimpaired (CDR 0s and 0.5s who showed normal memory functioning, N = 47); Possible MCI (CDR of 0.5 who evidenced memory impairment on objective testing, N = 43); and Dementia (CDR of 1 or 2, N = 35). A one-way ANOVA was used to test for group differences in the reported demographic variables, with posthoc comparisons to determine which groups differed from each other. The variables of age, CDR score, and CDR Sum of Boxes violated the assumption of homogeneity of variance; therefore, Dunnett's C (Dunnett, 1980) was used for posthoc comparisons of those indices between the groups, and a Bonferroni correction for multiple comparisons was used otherwise. The Unimpaired group was significantly younger than the Dementia group (p < .05). There were no significant differences between the groups in years of education. For the CDR total score, both the Unimpaired and Possible MCI groups differed significantly from the Dementia group (p < .05) but not from each other. For the CDR Sum of Boxes variable, the same pattern was seen, with the Unimpaired and Possible MCI groups differing from the Dementia group (p < .05) but not from each other.

WMT Scores by CDR Classifications

Table 2 displays the mean scores of all of the WMT subtests grouped by CDR diagnostic category. Within this table, the "Unimpaired" group consists of those individuals rated as a CDR of 0 or 0.5 who had objective memory functioning within normal limits (within 1.5 SD on delayed memory indices). The "Impaired 0.5" group consists of those individuals rated as 0.5 by our CDR raters with accompanying objective memory that was deemed impaired (more than 1.5 SD below the mean). Although not depicted, in the overall sample, 61/125 (48.8%) patients passed the WMT based on performance on the easy subtests. Additionally, there was a stepwise progression in failure of the WMT easy subtests across groups, with 82.2% of the Unimpaired group passing, 40% of the Possible MCI group passing, and only 17.1% of the Dementia group passing using manualized cutoffs.
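The group-comparison approach used here and in the analyses that follow (one-way ANOVA, then pairwise posthoc comparisons, with Dunnett's C when homogeneity of variance is violated and a Bonferroni correction otherwise) can be roughly sketched as follows. This is not the authors' code; Welch t-tests are used as a stand-in for Dunnett's C, which SciPy does not implement:

```python
from itertools import combinations
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """Rough sketch of the analysis described in the text (not the authors' code):
    Levene's test for homogeneity of variance, a one-way ANOVA, and Bonferroni-
    corrected pairwise comparisons, switching to Welch t-tests (as a stand-in
    for Dunnett's C) when variances are unequal."""
    names, samples = zip(*groups.items())
    equal_var = stats.levene(*samples).pvalue > alpha   # homogeneity of variance
    f_stat, p_value = stats.f_oneway(*samples)           # omnibus one-way ANOVA
    pairs = list(combinations(range(len(samples)), 2))
    corrected_alpha = alpha / len(pairs)                  # Bonferroni correction
    posthoc = {}
    for i, j in pairs:
        test = stats.ttest_ind(samples[i], samples[j], equal_var=equal_var)
        posthoc[(names[i], names[j])] = test.pvalue < corrected_alpha
    return f_stat, p_value, posthoc

# Small made-up example (not study data)
groups = {"Unimpaired": [95, 92, 97, 90, 94],
          "Impaired 0.5": [84, 80, 86, 78, 82],
          "CDR 1": [70, 65, 72, 68, 74]}
print(compare_groups(groups))
```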
A fairly consistent worsening of performance on objective delayed memory measures is also apparent as CDR ratings and WMT failures increase. One-way ANOVAs with the WMT indices (IR, DR, C, MC, PAR, and FR) as dependent variables across the different classification groups (Unimpaired CDR 0 + 0.5s, Impaired CDR 0.5s, CDR 1s, and CDR 2s), with appropriate posthoc comparisons, were conducted to test for group differences in the WMT indices. Significant differences between the groups shown in Table 2 were found (F = 15.34, p < .001 for IR; F = 16.51, p < .001 for DR; F = 16.72, p < .001 for C; F = 33.97, p < .001 for MC; F = 27.75, p < .001 for PAR; and F = 25.46, p < .001 for FR; df 3, 124). Homogeneity of variance was violated for the IR and DR indices; therefore, Dunnett's C was used for posthoc comparisons of those indices between the groups. A Bonferroni correction for multiple comparisons was used otherwise. The Unimpaired group (Unimpaired 0 + 0.5s) performed significantly better than all other groups (Impaired 0.5s, CDR 1, and CDR 2) on all WMT indices (p < .05 for IR and DR; p < .001 for C, MC, PAR, and FR). The Impaired 0.5 group performed better than the CDR 1 group on DR (p < .05) and on MC, PAR, and FR (p < .02 for all indices).

WMT Performance by CDR Classification for Veterans Passing or Failing the WMT Easy Subtests

Tables 3 and 4 display our participant groups based on whether or not they failed IR, DR, or C on the WMT. In the group of WMT passers (Table 3), there were significant differences on all but the DR and C indices (F = 2.89, p = .043 for IR; F = 4.83, p = .005 for MC; F = 6.37, p = .001 for PAR; and F = 5.02, p = .004 for FR; df 3, 57). Posthoc comparisons using a Bonferroni correction revealed that the group differences on the IR index were no longer significant after correction. On the MC index, the Unimpaired group significantly differed from the CDR 1 group (p = .046). On the PAR index, the Unimpaired group differed significantly from the Impaired 0.5 and CDR 1 groups (p = .024 and p = .004, respectively). On the FR index, the only significant difference after correction for multiple comparisons was the Unimpaired group performing better than the CDR 1 group (p = .004).
Table 3. WMT index scores for WMT easy subtest passers by CDR rating

| | CDR 0 (N = 5) | CDR 0.5 (N = 50) | Unimpaired (CDR 0 + 0.5, N = 37) | Impaired 0.5 (N = 18) | CDR 1 (N = 3) | CDR 2 (N = 3) |
|---|---|---|---|---|---|---|
| WMT IR | 97.00 (9.58) | 95.65 (4.54) | 96.01 (4.47) | 95.28 (4.36) | 90.00 (2.50) | 90.00 (9.01) |
| WMT DR | 98.50 (2.24) | 95.86 (3.83) | 96.15 (4.11) | 96.00 (3.08) | 95.00 (5.00) | 93.33 (7.64) |
| WMT C | 96.50 (2.85) | 92.80 (5.82) | 93.38 (5.78) | 92.64 (5.65) | 86.67 (3.82) | 91.67 (8.78) |
| WMT MC | 89.00 (14.32) | 75.10 (15.00) | 79.86 (15.07)a | 68.82 (16.54) | 53.33 (28.43) | 60.83 (10.10) |
| WMT PAR | 87.00 (9.75) | 69.80 (16.71) | 75.27 (15.98)b | 62.94 (16.11) | 40.00 (31.23) | 61.67 (10.41) |
| WMT FR | 39.00 (13.99) | 33.57 (10.80) | 35.95 (11.14)a | 30.00 (10.11) | 12.50 (4.33) | 28.75 (1.77) |
| CDR SOB | 0 | 1.49 (1.33) | 1.19 (0.96) | 1.69 (1.89) | 5.83 (0.58) | 9.67 (0.58) |
| Delayed Memory Z | −0.27 (1.36) | −0.57 (0.87) | −0.15 (0.52) | −1.46 (0.81) | −1.79 (0.79) | −0.77 (0.61) |

aSignificantly different than CDR 1. bSignificantly different than Impaired 0.5 and CDR 1 groups.

Table 4. WMT index scores for WMT easy subtest failures by CDR rating

| | CDR 0.5 (N = 35) | Unimpaired (CDR 0.5, N = 8) | CDR 0.5 (only Impaired, N = 27) | CDR 1 (N = 21) | CDR 2 (N = 8) |
|---|---|---|---|---|---|
| WMT IR | 71.93 (11.43) | 76.25 (10.35) | 70.65 (11.59) | 70.60 (10.87) | 68.44 (18.85) |
| WMT DR | 76.86 (13.95) | 78.44 (11.80) | 76.39 (14.70) | 67.62 (14.26) | 63.44 (18.02) |
| WMT C | 68.79 (9.77) | 67.19 (10.56) | 69.26 (9.68) | 64.88 (10.02) | 63.13 (8.74) |
| WMT MC | 35.29 (13.28) | 42.50 (11.65)a | 33.15 (13.17)b | 22.14 (14.02) | 22.50 (12.25) |
| WMT PAR | 38.01 (16.05) | 41.25 (12.46) | 37.02 (17.09) | 26.45 (15.17) | 21.88 (13.61) |
| WMT FR | 16.69 (8.18) | 23.13 (8.10)a | 14.71 (7.26) | 10.83 (11.50) | 7.81 (7.25) |
| CDR SOB | 2.19 (1.11) | 2.50 (1.11) | 2.09 (1.11) | 6.48 (1.63) | 9.88 (1.13) |
| Delayed Memory Z | −1.75 (1.06) | −0.24 (0.69) | −2.20 (0.65) | −2.52 (0.81) | −2.62 (1.08) |

aSignificantly different than CDR 1 and CDR 2 groups.
bSignificantly different than CDR 1 group.

In the WMT failure group (Table 4), all but four subjects with complete data obtained the SIP. Note that four subjects in the WMT failure group were excluded from later analyses due to missing PAR, FR, or both index scores; therefore, the presence of the SIP could not be calculated for these individuals. In the ANOVA analyses, significant differences were seen on all but the IR and C indices (F = 2.85, p = .045 for DR; F = 6.21, p < .001 for MC; F = 3.77, p = .015 for PAR; and F = 4.94, p = .004 for FR; df 3, 63). After using a Bonferroni correction in posthoc analyses, none of the classification groups differed significantly on the DR and PAR indices. For the MC index, the Unimpaired group performed better than the CDR 1 and 2 groups (p = .003 and p = .021, respectively). The Impaired CDR 0.5 group also performed better than the CDR 1 group (p = .034). On the FR index, the Unimpaired group performed better than the CDR 1 and 2 groups (p = .011 and p = .006, respectively).

Classification Statistics for SIP Predicting Poor Effort

As outlined in the Participants and Procedures section, a chart review was conducted to further ascertain the accuracy of the WMT SIP in the subsequent classification of individuals who initially failed the easy subtests (IR, DR, and C). Table 5 shows the results of this classification. In this table, not having the SIP (i.e., a positive test result) would predict poor effort, whereas having the SIP (i.e., a negative test result) would suggest that failure on the WMT was likely due to underlying memory impairment. Among the patients who were presumably correctly classified according to WMT performance, three patients did not have the SIP and also had corroborating evidence from chart review suggesting poor effort (i.e., true positives for poor effort). In addition, 35 patients had the SIP and also had corroborating evidence for cognitive impairment (i.e., true negatives for poor effort). Among patients incorrectly classified, one patient did not have the SIP, and would be classified as showing poor effort based on WMT performance, but had corroborating evidence for significant cognitive impairment (i.e., severe impairment with clear evidence of progressive decline per clinical chart review). Lastly, 21 patients had the SIP but had corroborating evidence for poor effort based on chart review (i.e., false negatives for poor effort). Overall, data from this sample support a sensitivity and positive predictive value of 12.5% and 75%, respectively. The specificity and negative predictive value in this sample would be 97% and 62.5%, respectively. The prevalence of poor effort in this sample is approximately 40% when PVT performance is considered in conjunction with corroborating information from chart review. Using this information, we calculated posttest probabilities, which represent the probability of poor effort (vs. the SIP [implied cognitive dysfunction]) given a positive test result (no SIP) or a negative test result (SIP). This resulted in a positive posttest probability of poor effort of 73% for a positive test result, and 37% for a negative test result (see Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000).
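To make the arithmetic behind these figures explicit, the short sketch below (not part of the original analyses) converts prevalence, sensitivity, and specificity into posttest probabilities via likelihood ratios, in the manner described by Sackett and colleagues (2000):

```python
def posttest_probabilities(prevalence, sensitivity, specificity):
    """Posttest probability of the target condition after a positive and a
    negative test result, computed via likelihood ratios (Sackett et al., 2000)."""
    pretest_odds = prevalence / (1.0 - prevalence)
    lr_positive = sensitivity / (1.0 - specificity)
    lr_negative = (1.0 - sensitivity) / specificity
    def odds_to_prob(odds):
        return odds / (1.0 + odds)
    return odds_to_prob(pretest_odds * lr_positive), odds_to_prob(pretest_odds * lr_negative)

# Rounded values reported above for the poor-effort framing: prevalence ~40%,
# sensitivity 12.5%, specificity ~97%. The result (~0.74, ~0.38) is consistent,
# within rounding of the inputs, with the 73% and 37% reported in the text.
print(posttest_probabilities(prevalence=0.40, sensitivity=0.125, specificity=0.97))
```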
Table 5. Accuracy of classification determined solely by SIP

| | Evidence of poor effort | No evidence of poor effort | |
|---|---|---|---|
| No SIP pattern | TP = 3 | FP = 1 | PPV = 3/(3 + 1) × 100 = 75% |
| SIP pattern | FN = 21 | TN = 35 | NPV = 35/(35 + 21) × 100 = 62.5% |
| | Sens. = 3/(3 + 21) = 12.5% | Spec. = 35/(35 + 1) = 97.22% | |

Note: SIP = severe impairment profile; TP = true positive; FP = false positive; PPV = positive predictive value; FN = false negative; TN = true negative; NPV = negative predictive value; Sens. = sensitivity; Spec. = specificity.

Classification Statistics for SIP Predicting Impaired Cognition

In addition to providing information about effort, use of the SIP also implies that a portion of the sample is likely cognitively impaired. That is, while not having the SIP is meant to describe individuals who are likely to have produced unreliable test performance, having the SIP is meant to suggest that the response inconsistencies are secondary to expected memory deficits consistent with a diagnosis of dementia or another condition capable of producing moderate to severe cognitive impairment. In this sense, a corollary table was created to examine the SIP as evidence for cognitive impairment (i.e., the converse of not obtaining the SIP: if the SIP is obtained, then there is evidence for cognitive impairment). Table 6 shows the results of this classification. Among the patients who were correctly classified, 35 patients with the SIP also had corroborating evidence from chart review suggesting the presence of cognitive impairment (i.e., true positives for identifying genuine impairment). In addition, three patients did not have the SIP and had corroborating evidence that did not support the presence of genuine impairment (i.e., true negatives for genuine impairment). Among patients incorrectly classified, 21 patients had the SIP but corroborating evidence from chart review did not support the presence of severe impairment (i.e., false positives for genuine impairment). Lastly, one patient did not have the SIP but corroborating evidence from chart review (severe cognitive impairment with clear progressive decline) demonstrated genuine cognitive impairment (i.e., a false negative for genuine impairment). Overall, data from this sample support a sensitivity and positive predictive value of 97% and 62.5%, respectively. The specificity and negative predictive value in this sample would be 12.5% and 75%, respectively. Based on this sample, the cognitive impairment base rate is approximately 63%. Using this information, we calculated posttest probabilities, which represent the probability of cognitive impairment (i.e., SIP [implied cognitive dysfunction] vs. poor effort) given a positive test result (having the SIP) or a negative test result (no SIP). This resulted in a positive posttest probability of impairment of 65% for a positive test result, and 28% for a negative test result.
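As a worked illustration (not taken from the article) of how the impairment-framing posttest probabilities follow from the reported base rate (63%), sensitivity (97.2%), and specificity (12.5%):

$$\text{pretest odds} = \frac{0.63}{1-0.63} \approx 1.70,\qquad LR^{+} = \frac{0.972}{1-0.125} \approx 1.11,\qquad LR^{-} = \frac{1-0.972}{0.125} \approx 0.22$$

$$P(\text{impairment}\mid\text{SIP}) = \frac{1.70\times1.11}{1+1.70\times1.11} \approx .65,\qquad P(\text{impairment}\mid\text{no SIP}) = \frac{1.70\times0.22}{1+1.70\times0.22} \approx .27$$

Within rounding of the inputs, these values correspond to the approximately 65% and 28% reported above.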
Table 6. Accuracy of classification determined by chart review

| | Evidence of cognitive impairment | No evidence of cognitive impairment | |
|---|---|---|---|
| SIP pattern | TP = 35 | FP = 21 | PPV = 35/(35 + 21) × 100 = 62.5% |
| No SIP pattern | FN = 1 | TN = 3 | NPV = 3/(3 + 1) × 100 = 75% |
| | Sens. = 35/(35 + 1) = 97.22% | Spec. = 3/(3 + 21) = 12.5% | |

Note: SIP = severe impairment profile; TP = true positive; FP = false positive; PPV = positive predictive value; FN = false negative; TN = true negative; NPV = negative predictive value; Sens. = sensitivity; Spec. = specificity.

Discussion

The current study sought to replicate findings from Green and colleagues (2011) by examining the SIP of the WMT in a sample of older US veterans referred for neuropsychological testing due to memory complaints raising concern for dementia. As hypothesized, increasing level of impairment as indicated by CDR scores was associated with a higher likelihood of performing below the recommended cut scores on the easy WMT subtests. For patients who failed the easy subtest items, further characterization of the SIP was made through integration of additional chart and neuropsychological test data, with results similar to those of Axelrod and Schutte (2010). The current study demonstrated high specificity (0.97), similar to that of Green and colleagues (2011) (0.98); however, this was tempered by a lower positive predictive value (0.75) and poor sensitivity (0.12). The low sensitivity values are important, as 93% of patients (56/60) demonstrated the SIP, but the NPV associated with this classification was 0.62.

A primary purpose of a PVT, such as the WMT, is to aid the clinician in distinguishing those who perform suboptimally from those who do not. This dichotomy regarding poor effort is reasonably made based on PVT performance. In contrast, inferences regarding optimal effort on a continuum, intentionality, or the presence or absence of neurological disease cannot reasonably be made based on PVT performance alone and generally necessitate additional integration of data by the neuropsychologist. The SIP methodology, as currently proposed by Green and colleagues (2011), appears to move away from this inferential dichotomy and subsequently creates the potential for providers to affirm support for a neurologic diagnosis based on WMT SIP performance. For example, among patients referred for neuropsychological evaluation in the context of possible neurologic disease (e.g., dementia, vascular disease, epilepsy), a passing WMT performance on the "easy" items results in a determination that there was no evidence for poor effort. In contrast, a failure on the WMT easy items might prompt the neuropsychologist to subsequently examine the SIP to further ascertain effort. In this case the question changes from evidence for "poor effort" versus "no evidence for poor effort" to evidence for "poor effort" (i.e., easy-hard subtest average difference <30 points) versus "affirmation of severe impairment" (i.e., the SIP as currently conceptualized). While it could be argued that the inference is merely support for "poor effort" versus "unknown," the mere labeling of the profile as the "SIP" (or GMIP) is laden with meaning and suggests to providers an affirmation of disease state.
The SIP as an inferential dichotomy of "poor effort" versus "severe impairment" represents a conceptual shift from the original intent of the WMT (i.e., performance on the easy subtests), or forced-choice PVTs in general, and appears to be only tenuously supported by the current study. It is noteworthy that the higher the CDR score, the higher the probability of impairments on neuropsychological testing, and the greater the likelihood of WMT failure on the easy test items. However, of those individuals performing below the cutoff on the WMT who had complete data, 93.3% (56/60) displayed the SIP; in other words, the SIP classifies nearly every WMT failure as consistent with neurological disease. When obtaining the SIP is viewed as a positive test for neurologic disease (i.e., as implied by the "poor effort" vs. "genuine impairment" dichotomy), the diagnostic accuracy of identifying "impairment" in the current sample is modest at best (PPV = 62% for identifying "severe impairment"; see Table 6). Establishing a WMT profile score that minimizes false positive errors would support its originally intended purpose as a PVT in a dementia sample; however, the SIP is observed too frequently and is often inconsistent with additional medical information. The practical utility and empirical support for use of the SIP in the current sample is therefore questionable.

Ascertaining variable effort in neuropsychological assessment remains a challenging process, and the WMT remains one of the most powerful tools available to providers. It has demonstrated consistently excellent psychometric properties in traumatic brain injury samples (e.g., Green, Rohling, Lees-Haley, & Allen, 2001; Iverson, Green, & Gervais, 1999; Proto et al., 2014). The use of the WMT, and other PVTs, in other neurological samples remains fully warranted. However, additional studies regarding the utility of PVT performance among individuals at risk for degenerative disorders should be undertaken. As previously indicated, many PVTs approximate the format of a forced-choice memory task, and the WMT relies heavily on a recognition memory strategy for successful completion of the easy subtests. Although speculative, it remains possible that WMT test properties may be less reliable for patients with amnestic profiles (e.g., amnestic MCI to AD), as opposed to other neurological populations where mesial temporal memory processes are less directly impaired. For example, it is generally accepted that disparate neurologic substrates mediate dissociable aspects of memory (e.g., recollection versus familiarity; Aggleton & Brown, 2006; but also see Squire, Wixted, & Clark, 2008), and various neurologic disorders are postulated to produce such impairments (e.g., Koen & Yonelinas, 2014). It has also been suggested that intratemporal specialization exists, with tasks high on a "semantico-syntactic" continuum dependent on the anterior portion of the inferior temporal gyrus and, to a lesser extent, perirhinal activity (Saling, 2009). At this point we believe that additional research in at-risk populations is needed to better characterize PVT failure in older patients, rates of which have been shown to be higher than in younger, non-litigating populations (e.g., Rienstra et al., 2013), and it is possible that population-specific cut scores may be appropriate.

The current study has several strengths.
First, this study evaluated an older population that is at high risk for PVT failure both because of genuine memory impairment and because of possible secondary gain factors, leading to a reasonable expectation that some patients would display non-credible performance even when the SIP was present. Second, the current study was able to utilize medical records from both before and after the neuropsychological evaluations, which created the opportunity to determine whether impairment was likely due to a degenerative condition causing genuine memory impairment or was related to non-credible performance due to possible secondary gain. An additional strength of the study is the use of patient evaluations conducted as part of routine clinical care, while screening out individuals in whom a degenerative condition would have been atypical (e.g., under the age of 55). Last, this study was able to utilize various methods for the judgment of impairment in the presence of possible dementia through the use of trained CDR raters, objective memory performance, and performance on the WMT.

The current study also has several limitations. The data utilized did not include detailed diagnostic information, which would have allowed further exploration of PVT failures in relation to expected deficit patterns. Another limitation is that the current study utilized data collected as part of routine clinical care and did not use a standardized battery of tests. Such a standardized procedure would have allowed for closer inspection of objective performance as it relates to failed effort indicators and the SIP. Additionally, although the use of corroborating evidence from chart review adds to the literature, it may be helpful for future research to more tightly define and operationalize the variables included in chart review to minimize the possible subjectivity of such reviews.

In conclusion, the results of the current study suggest that the context of the evaluation is important in the utilization of PVTs and in the interpretation of cutoffs used to distinguish genuine memory impairment from non-credible performance in populations at risk for dementia. Whereas, in theory, a SIP would be useful if it accurately distinguished genuine memory impairment from suboptimal performance due to inadequate effort, the current iteration of the SIP appears to identify almost all patients who fail the WMT easy subtests as genuinely impaired (even when other clinical evidence suggests that some of those patients likely do not have severe memory impairment), which limits the clinical utility of the profile. Broader concern is also expressed regarding the SIP, as the SIP itself represents a conceptual shift wherein the SIP may be used to affirm disease state.

Conflict of Interest

None declared.

References

Aggleton, J. P., & Brown, M. W. (2006). Interleaving brain systems for episodic and recognition memory. Trends in Cognitive Sciences, 10, 455–463.

Armistead-Jehle, P. (2010). Symptom validity test performance in US veterans referred for evaluation of mild TBI. Applied Neuropsychology, 17, 52–59.

Axelrod, B. N., & Schutte, C. (2010). Analysis of the dementia profile on the Medical Symptom Validity Test. The Clinical Neuropsychologist, 24, 873–881.

Bush, S. S., & Morgan, J. E. (2012). Improbable neuropsychological presentations: Assessment, diagnosis, and management. In S. S. Bush (Ed.), Neuropsychological practice with veterans (pp. 27–44). New York: Springer Publishing Company.
Brandt, J., & Benedict, R. (2001). Hopkins Verbal Learning Test – Revised (HVLT-R). Lutz, FL: PAR.

Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity: NAN Policy and Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426.

Clark, L. R., Stricker, N. H., Libon, D. J., Delano-Wood, L., Salmon, D. P., Delis, D. C., et al. (2012). Yes/No forced choice recognition memory in mild cognitive impairment and Alzheimer's disease: Patterns of impairment and associations with dementia severity. Clinical Neuropsychology, 26, 1201–1216.

Dean, A. C., Victor, T. L., Boone, K. B., Philpott, L. M., & Hess, R. A. (2009). Dementia and effort test performance. The Clinical Neuropsychologist, 23, 133–152.

Delis, D., Kramer, J. H., Kaplan, E., & Ober, B. (2000). The California Verbal Learning Test: Adult Version. San Antonio, TX: The Psychological Corporation.

Dunnett, C. W. (1980). Pairwise multiple comparisons in the unequal variance case. Journal of the American Statistical Association, 75, 796–800.

Green, P., Lees-Haley, P. R., & Allen, L. M. (2002). The Word Memory Test and the validity of neuropsychological test scores. Forensic Neuropsychology, 2, 97–124.

Green, P. (2003). Green's Word Memory Test for Microsoft Windows: User's manual. Edmonton, Canada: Green's Publishing.

Green, P. (2004). Manual for the Medical Symptom Validity Test. Edmonton, Canada: Green's Publishing.

Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M. (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Injury, 15, 1045–1060.

Green, P., Flaro, L., & Courtney, J. (2009). Examining false positives on the Word Memory Test in adults with mild traumatic brain injury. Brain Injury, 23, 741–750.

Green, P., Montijo, J., & Brockhaus, R. (2011). High specificity of the Word Memory Test and the Medical Symptom Validity Test in groups with severe verbal memory impairment. Applied Neuropsychology, 18, 86–94.

Greve, K. W., Ord, J., Curtis, K. L., Bianchini, K. J., & Brennan, A. (2008). Detecting malingering in traumatic brain injury and chronic pain: A comparison of three forced-choice symptom validity tests. The Clinical Neuropsychologist, 22, 896–918.

Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., & Millis, S. R., Conference Participants. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.

Howe, L., & Loring, D. (2009). Classification accuracy and predictive ability of the Medical Symptom Validity Test's dementia profile and genuine memory impairment profile. The Clinical Neuropsychologist, 23, 329–342.
Iverson, G., Green, P., & Gervais, R. (1999). Using the Word Memory Test to detect biased responding in head injury litigation. The Journal of Cognitive Rehabilitation, 17, 4–8.

Kiewel, N. A., Wisdom, N. M., Bradshaw, M. R., Pastorek, N. J., & Strutt, A. M. (2012). A retrospective review of Digit Span-related effort indicators in probable Alzheimer's disease patients. The Clinical Neuropsychologist, 26, 965–974.

Koen, J. D., & Yonelinas, A. P. (2014). The effects of healthy aging, amnestic mild cognitive impairment, and Alzheimer's disease on recollection and familiarity: A meta-analytic review. Neuropsychology Review, 24, 332–354.

Morris, J. C. (1993). The Clinical Dementia Rating Scale (CDR): Current version and scoring rules. Neurology, 43, 2412–2414.

Morris, J. C., Ernesto, C., Schafer, K., Coats, M., Leon, S., Sano, M., et al. (1997). Clinical dementia rating training and reliability in multicenter studies: The Alzheimer's Disease Cooperative Study experience. Neurology, 48, 1508–1510.

Proto, D. A., Pastorek, N. J., Miller, B. I., Romesser, J. M., Sim, A. H., & Linck, J. F. (2014). The dangers of failing one or more performance validity tests in individuals claiming mild traumatic brain injury-related postconcussive symptoms. Archives of Clinical Neuropsychology, 29, 614–624.

Randolph, C. (1998). Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). San Antonio, TX: Harcourt, The Psychological Corporation.

Rienstra, A., Groot, P. F., Spaan, P. E., Majoie, C. B., Nederveen, A. J., Walstra, G. J., et al. (2013). Symptom validity testing in memory clinics: Hippocampal-memory associations and relevance for diagnosing mild cognitive impairment. Journal of Clinical and Experimental Neuropsychology, 35, 59–70.

Rienstra, A., Twennaar, M. K., & Schmand, B. (2013). Neuropsychological characterization of patients with the WMT dementia profile. Archives of Clinical Neuropsychology, 28, 463–475.

Sackett, D., Straus, S., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). St. Louis: Churchill Livingstone.

Saling, M. M. (2009). Verbal memory in mesial temporal lobe epilepsy: Beyond material specificity. Brain, 132, 570–582.

Singhal, A., Green, P., Ashaye, K., & Gill, D. (2009). High specificity of the Medical Symptom Validity Test in patients with very severe memory impairment. Archives of Clinical Neuropsychology, 24, 721–728.

Squire, L. R., Wixted, J. T., & Clark, R. E. (2008). Recognition memory and the medial temporal lobe: A new perspective. Nature Reviews Neuroscience, 8, 872–883.

Teichner, G., & Wagner, M. T. (2004). The Test of Memory Malingering (TOMM): Normative data from cognitively intact, cognitively impaired, and elderly patients with dementia. Archives of Clinical Neuropsychology, 19, 455–464.

Tombaugh, T. N. (1996). TOMM: Test of Memory Malingering manual. North Tonawanda, NY: Multi-Health Systems, Inc.
Vickery, C. D., Berry, D. T., Inman, T. H., Harris, M. J., & Orey, S. A. (2001). Detection of inadequate effort on neuropsychological testing: A meta-analytic review of selected procedures. Archives of Clinical Neuropsychology, 16, 45–73.

Wechsler, D. (2009). Wechsler Memory Scale – Fourth Edition (WMS-IV). San Antonio, TX: Pearson Assessment.

Whearty, K. M., Allen, D. N., Lee, B. G., & Strauss, G. P. (2015). The evaluation of insufficient cognitive effort in schizophrenia in light of low IQ scores. Journal of Psychiatric Research, 68, 397–404.

Young, J. C., Kearns, L. A., & Roper, B. L. (2011). Validation of the MMPI-2 Response Bias Scale and Henry-Heilbronner Index in a U.S. veteran population. Archives of Clinical Neuropsychology, 26, 194–204.

Published by Oxford University Press 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.
