Reliable Digit Span: Does it Adequately Measure Suboptimal Effort in an Adult Epilepsy Population?

Abstract

Objective: Assessment of performance validity is a necessary component of any neuropsychological evaluation. Prior research has shown that cutoff scores of ≤6 or ≤7 on Reliable Digit Span (RDS) can detect suboptimal effort across numerous adult clinical populations; however, these scores have not been validated for that purpose in an adult epilepsy population. This investigation aims to determine whether these previously established RDS cutoff scores can detect suboptimal effort in adults with epilepsy.

Method: Sixty-three clinically referred adults with a diagnosis of epilepsy or suspected seizures were administered the Digit Span subtest of the Wechsler Adult Intelligence Scale (WAIS-III or WAIS-IV). Most participants (98%) passed Trial 2 of the Test of Memory Malingering (TOMM), achieving a score of ≥45.

Results: Previously established cutoff scores of ≤6 and ≤7 on RDS yielded specificity rates of 85% and 77%, respectively. Findings also revealed that RDS scores were positively related to attention and intellectual functioning. Given the less than ideal specificity rate associated with each of these cutoff scores, together with their strong association with cognitive factors, secondary analyses were conducted to identify more optimal cutoff scores. Preliminary results suggest that an RDS cutoff score of ≤4 may be more appropriate in a clinically referred adult epilepsy population with low average IQ or below.

Conclusions: Preliminary findings indicate that cutoff scores of ≤6 and ≤7 on RDS are not appropriate in adults with epilepsy, especially in individuals with low average IQ or below.

Keywords: Performance validity test; Suboptimal effort; Epilepsy; Reliable digit span; Test of Memory Malingering

Introduction

Performance validity testing is an integral part of any neuropsychological evaluation, particularly in a forensic setting or when there is a question of secondary gain (Axelrod, Fichtenberg, Millis, & Wertheimer, 2006; Lee, Loring, & Martin, 1992). Still, suboptimal effort can occur for a multitude of other reasons as well, including psychiatric conditions, inattentiveness, poor motivation, and fatigue (Hunt, Ferrara, Miller, & Macciocchi, 2007; Lange, Pancholi, Bhagwat, Anderson-Barnes, & French, 2012). Accordingly, the National Academy of Neuropsychology (Bush et al., 2005) and the American Academy of Clinical Neuropsychology (Heilbronner et al., 2009) have issued strong statements urging neuropsychologists to incorporate performance validity tests (PVTs) in every evaluation, regardless of the referral question (Schroeder, Twumasi-Ankrah, Baade, & Marshall, 2012; Sharland & Gfeller, 2007), as these instruments provide insight into whether an examinee's scores likely reflect their actual level of cognitive functioning or, instead, a failure to put forth maximum effort (Constantinou, Bauer, Ashendorf, Fisher, & McCaffrey, 2005). PVTs are classified as either stand-alone or embedded measures (Gunner, Miele, Lynch, & McCaffrey, 2012; Hilsabeck, Gordon, Hietpas-Wilson, & Zartman, 2011). While stand-alone PVTs are designed specifically to assess effort, embedded PVTs are derived from instruments that have their own clinical utility (Heilbronner et al., 2009).
Particular strengths of embedded PVTs are that they provide clinicians with multiple opportunities to monitor engagement without lengthening testing (Curtis, Greve, Bianchini, & Brennan, 2006; Gunner et al., 2012; Mathias, Greve, Bianchini, Houston, & Crouch, 2002). Conversely, a drawback of embedded PVTs is that their cutoff scores are derived from instruments that also measure cognitive functioning; thus, poor performance on these measures can also occur as a result of genuine cognitive impairment (Dean, Victor, Boone, & Arnold, 2008; Slick, Sherman, & Iverson, 1999). Accordingly, practitioners need to select embedded PVTs, and cutoff scores, for which they can be highly confident that poor performance is not linked to genuine cognitive impairment, such as low intellectual functioning (Axelrod et al., 2006; Bianchini, Mathias, Greve, Houston, & Crouch, 2001). Prior work has shown that IQ and performance on PVTs (e.g., Rey 15-Item, Test of Memory Malingering, and Word Memory Test) are significantly related, such that lower IQ adversely impacts performance on PVTs (see Dean et al., 2008 for a review).

Ideally, PVTs should demonstrate strong sensitivity and specificity (Greve & Bianchini, 2004). However, because these properties are often inversely related, this is not always possible. In such situations, practitioners are encouraged to prioritize specificity, taking every measure to limit the number of false positives (Greve & Bianchini, 2004). Additionally, given that base rates for PVT failures differ across clinical groups (Hilsabeck et al., 2011; Mittenberg, Patton, Canyock, & Condit, 2002), cutoff scores must be independently validated in a given population, with the general rule of thumb being that 10% of any given population should be identified as putting forth suboptimal effort (Boone, 2007; Dean et al., 2008).

The Digit Span subtest has received considerable attention in the PVT literature (Boone, 2007). Prior research demonstrates that various components of the Digit Span subtest (Babikian, Boone, Lu, & Arnold, 2006; Heinly, Greve, Bianchini, Love, & Brennan, 2005), including reliable digit span (RDS; Babikian et al., 2006; Larrabee, 2003; Schroeder et al., 2012), can detect suboptimal effort in many adult clinical populations, such as those with toxic exposure (Greve et al., 2007), attention-deficit/hyperactivity disorder (Marshall et al., 2010), traumatic brain injury (Heinly et al., 2005; Mathias et al., 2002), and psychotic disorders (Schroeder & Marshall, 2011). Indeed, RDS is among the most commonly used embedded PVTs in neuropsychology (Boone, 2007); it is calculated by adding the longest string of digits forward and the longest string of digits backward for which both trials were passed (Greiffenstein, Baker, & Gola, 1994). To date, few studies have investigated whether the Digit Span subtest is a valid PVT in adults with epilepsy (Iverson & Tulsky, 2003; Loring, Lee, & Meador, 2005), and no study has examined the RDS component of the Digit Span subtest within this population. Historically, RDS cutoff scores of ≤6 and ≤7 have been used to detect suboptimal effort across many adult populations because they are typically associated with a 90% pass rate (Schroeder et al., 2012).
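To make the RDS computation concrete, the following is a minimal sketch of the calculation described by Greiffenstein and colleagues (1994). The trial-level data structure and function names are illustrative assumptions, not part of any Wechsler scoring software.

```python
# Minimal sketch of the Reliable Digit Span (RDS) calculation (Greiffenstein,
# Baker, & Gola, 1994): longest forward span with both trials passed plus
# longest backward span with both trials passed. The dict-based data structure
# below is a hypothetical illustration, not part of any scoring software.

def longest_passed_span(trials):
    """trials maps span length -> (trial1_correct, trial2_correct)."""
    passed = [length for length, (t1, t2) in trials.items() if t1 and t2]
    return max(passed) if passed else 0

def reliable_digit_span(forward_trials, backward_trials):
    return longest_passed_span(forward_trials) + longest_passed_span(backward_trials)

# Example: longest forward span passed on both trials is 5, longest backward
# span passed on both trials is 3, so RDS = 8.
forward = {3: (True, True), 4: (True, True), 5: (True, True), 6: (True, False)}
backward = {2: (True, True), 3: (True, True), 4: (False, False)}
print(reliable_digit_span(forward, backward))  # 8
```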
Nevertheless, some have cautioned against the use of such high cutoff scores, as they generate high false positive rates (i.e., less than a 90% pass rate) in several adult clinical populations, including those with intellectual disability (ID), cerebrovascular accidents, or severe memory disorders, as well as non-native English speakers (Dean et al., 2008; Heinly et al., 2005; Salazar, Lu, Wen, & Boone, 2007). As such, these researchers have suggested that a lower cutoff score be used within these clinical populations (Schroeder et al., 2012). Still, these cutoff scores (≤6, ≤7) have not been validated in adults with epilepsy.

Welsh, Bender, Whitman, Vasserman, and MacAllister (2012) demonstrated that cutoff scores validated in other clinical populations cannot simply be applied to individuals with epilepsy. In their study, which examined RDS in a pediatric epilepsy sample, an established RDS cutoff score of ≤6 yielded a failure rate of 65%, falling far short of the recommended criterion that a PVT should yield a 90% pass rate (Boone, 2007; Dean et al., 2008). Further, when failure rates on the Test of Memory Malingering (TOMM; Tombaugh, 1996) and RDS were compared, this cutoff score generated a specificity value of 71%. Lastly, they showed that RDS scores were positively correlated with IQ, suggesting that RDS is not a completely independent measure of effort. Welsh and colleagues (2012) thus provide preliminary evidence that a cutoff score lower than ≤6 may also be needed in an adult epilepsy population, as these individuals often exhibit cognitive deficits. Specifically, it has been reported that up to a third of adults with epilepsy report cognitive complaints (Mojs, Gajewska, Głowacka, & Samborski, 2007). While a wide range of cognitive functions has been found to be impaired within this population, the most frequently reported deficits are in memory, processing speed, and attention (Aldenkamp, 2006; Schubert, 2005). Additionally, general intellectual functioning, as measured by IQ, has been found to be compromised within this clinical population (Baxendale, McGrath, & Thompson, 2014). Lastly, seizure factors, including seizure etiology, seizure type, seizure frequency, and age of seizure onset, have been shown to influence the pattern and extent of cognitive impairments (Elger, Helmstaedter, & Kurthen, 2004).

The current study has several goals. The primary purpose is to evaluate the specificity of RDS using the most cited RDS cutoff scores, ≤6 and ≤7, and to compare failure rates on RDS with those of Trial 2 of the TOMM (Tombaugh, 1996) in an adult epilepsy population. For this comparison, a cutoff score of <45 on Trial 2 of the TOMM will be used, as recommended by O'Bryant, Engel, Kleiner, Vasterling, and Black (2007) and Hilsabeck and colleagues (2011). As mentioned, the general rule of thumb is that 10% of any given population should be identified as putting forth suboptimal effort unless established base rates exist for PVT failures in a particular population. Another goal of this study is to examine the relationship between RDS and overall intellectual functioning, as prior work (Welsh et al., 2012) has shown that RDS is related to IQ in children with epilepsy. A related goal is to examine the relationship between attention and RDS, because RDS is derived from the Digit Span subtest of the Wechsler instruments, which is a measure of attention and working memory.
If such a relationship emerges, it would further suggest that RDS is not an independent measure of effort but is also linked to deficits in attention and working memory. Finally, given the expectation that RDS scores would be significantly related to IQ, and that cutoff scores of ≤6 and ≤7 would yield a lower than acceptable pass rate for our sample (i.e., less than 90% would pass the RDS), an exploratory goal of this study is to identify a more optimal cutoff score for individuals with epilepsy who have a low IQ. To accomplish this latter goal, we will again compare failure rates on RDS with those of Trial 2 of the TOMM (Tombaugh, 1996), using a cutoff score of <45. However, given that Trial 2 of the TOMM has been shown to be somewhat less sensitive than other measures (Green, 2011), failure rates on RDS are not expected to be identical to those of Trial 2 of the TOMM. Instead, we will use the general rule of thumb that 10% of a clinical population should fail a given PVT (Boone, 2007), together with our sample's actual failure rates on Trial 2 of the TOMM, to inform our decision about a more optimal cutoff score.

Methods

Participants

This retrospective study consisted of 63 clinically referred adults with a diagnosis of epilepsy or suspected seizures who underwent neuropsychological testing at the NYU Comprehensive Epilepsy Center as part of their routine clinical care, conducted on either an inpatient or outpatient basis. As part of this comprehensive neuropsychological evaluation, all participants completed the TOMM, the Digit Span subtest of one of the Wechsler instruments, the Coding and List Learning subtests of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS; Randolph, Tierney, Mohr, & Chase, 1998), and a Wechsler scale providing an estimate of IQ. The primary inclusion criterion for this study was a confirmed diagnosis of epilepsy or suspected seizures. A diagnosis of epilepsy or suspected seizures was rendered by a neurologist/epileptologist via a neurological workup that most often included video electroencephalography (98%) and a clinical interview, as suggested by the International League Against Epilepsy (Fisher et al., 2014). In only one instance was an epilepsy diagnosis made via a routine EEG. When possible, EEG established the nature of participants' seizures (i.e., focal, generalized, mixed), as well as the lateralization and localization of seizure onset. Additionally, magnetic resonance imaging (MRI) findings were available for most of our sample. Epilepsy severity variables, including age of seizure onset and number of antiepileptic drugs (AEDs), were also recorded. Lastly, English had to be participants' primary language.

Procedures

All participants were administered the Digit Span subtest of the Wechsler Adult Intelligence Scale (WAIS), either the third edition (Wechsler, 1997) or the fourth edition (Wechsler, 2008), the TOMM (Tombaugh, 1996), and a Wechsler measure of intelligence (see below for a full breakdown of Wechsler measures) for an estimate of full-scale IQ, as part of a comprehensive neuropsychological assessment. Additionally, to assess attention, participants were administered the Coding and List Learning subtests of the RBANS (Randolph et al., 1998). In instances where participants obtained a score of ≥45 on Trial 1 of the TOMM, Trial 2 of the TOMM was not given, as recommended by O'Bryant and colleagues (2007) and Hilsabeck and colleagues (2011), and their Trial 1 score was carried forward as their Trial 2 score.
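For clarity, the discontinuation rule just described can be sketched as follows. This is a hypothetical illustration of the decision logic only; it is not scoring software distributed with the TOMM, and the names used are assumptions.

```python
# Sketch of the Trial 1 screening rule described above (O'Bryant et al., 2007;
# Hilsabeck et al., 2011): when Trial 1 is >= 45, Trial 2 is not administered
# and the Trial 1 score is carried forward. Names are illustrative only.

TOMM_CUTOFF = 45  # scores below this value are treated as PVT failures

def effective_trial2_score(trial1_score, trial2_score=None):
    """Return the score used in place of Trial 2 under the discontinuation rule."""
    if trial1_score >= TOMM_CUTOFF:
        return trial1_score              # Trial 2 skipped; Trial 1 carried forward
    if trial2_score is None:
        raise ValueError("Trial 2 must be administered when Trial 1 < 45")
    return trial2_score

print(effective_trial2_score(47))      # 47 (Trial 2 not given)
print(effective_trial2_score(42, 46))  # 46 (Trial 2 administered)
```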
All administration followed each instrument's standardized manual procedures.

Measures

Reliable Digit Span

The primary dependent measure in this study is RDS. As mentioned, RDS is an embedded measure obtained from the Digit Span subtest; it is calculated by summing the maximum number of digits forward and the maximum number of digits backward for which both trials were passed (Greiffenstein et al., 1994). Prior research (see Introduction) has shown that cutoff scores of ≤6 and ≤7 are appropriate for detecting suboptimal effort in many adult clinical populations, although some have cautioned against the use of such high cutoff scores in individuals with ID, cerebrovascular accidents, or severe memory disorders, as well as in non-native English speakers (Greve et al., 2007; Heinly et al., 2005; Marshall et al., 2010; Mathias et al., 2002; Schroeder & Marshall, 2011; Schroeder et al., 2012). Lastly, although the factor structure of backward digit span differs between the WAIS-III and WAIS-IV, research has shown that RDS cutoff scores of ≤6 and ≤7 are appropriate for both versions of the WAIS (Schroeder et al., 2012).

Test of Memory Malingering

The TOMM (Tombaugh, 1996) is one of the most popular and well-validated PVTs in the field of neuropsychology (Sharland & Gfeller, 2007; Slick, Tan, Strauss, & Hultsch, 2004). A cutoff score of <45 on Trial 1 of the TOMM has routinely demonstrated high rates of sensitivity and specificity across even the most cognitively impaired clinical populations, providing strong evidence that it measures performance validity rather than cognition (Duncan, 2005; Greve, Bianchini, & Doane, 2006; Rees, Tombaugh, Gansler, & Moczynski, 1998; Teichner & Wagner, 2004; Tombaugh, 1997; Weinborn, Orr, Woods, Conover, & Feix, 2003). The TOMM is a 50-item visual recognition task that consists of two learning trials and an optional retention trial. Participants are first presented with pictures of everyday objects for 3 s each, after which they are asked to choose between a foil image and the correct image. Prior research has demonstrated that both trials of the TOMM can identify poor effort in an adult epilepsy population (Cragar, Berry, Fakhoury, Cibula, & Schmitt, 2006; Wisdom, Brown, Chen, & Collins, 2012), although Trial 2 of the TOMM has been shown to be a more clinically valid PVT (see previous citations).

Intellectual Assessment

The third goal of this study is to examine the relationship between RDS scores and intellectual functioning; thus, full-scale IQ (FSIQ) was measured using several different Wechsler intelligence tests. The majority of participants (91%) received the four-subtest version of the Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II; Wechsler, 2011), with the remainder completing either the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III; Wechsler, 1997) or the Wechsler Adult Intelligence Scale, Fourth Edition (WAIS-IV; Wechsler, 2008). The WAIS-IV manual (Wechsler, 2008) reports that the WAIS-IV FSIQ is highly correlated (.92) with the WASI-II four-subtest FSIQ. Additionally, prior research has demonstrated that these Wechsler instruments are highly correlated (Axelrod, 2002); thus, it was deemed appropriate to collapse full-scale IQ scores across these instruments into a single variable. IQ was examined as both a numerical and a categorical variable.
When analyzed as a categorical variable, IQ was classified as follows: (A) average IQ and above: standard score ≥90 and (B) low average IQ and below: standard score ≤89.

Repeatable Battery for the Assessment of Neuropsychological Status

The RBANS (Randolph et al., 1998) is a brief cognitive screener that is commonly used in adult neuropsychological evaluations to assess a wide range of cognitive domains, including immediate memory, visuospatial/constructional abilities, language, attention, and delayed memory. Performance on each of these indices can be broken down into a number of subtests. Given the aims of this study, the List Learning and Coding subtests of the RBANS were utilized, as prior research has shown that these subtests correlate significantly with measures of attention (McKay, Casey, Wertheimer, & Fichtenberg, 2007). Raw scores were recorded for each of these subtests. Finally, the Digit Span subtest of the RBANS was not examined in this study, given its near-identical relationship to the Digit Span subtest of the Wechsler instruments.

Statistical Plan

Descriptive statistics were computed for demographic variables (e.g., age, race, and ethnicity), seizure variables (e.g., age of seizure onset, type of epilepsy), and IQ. Basic descriptive statistics were also gathered for performance on Trial 1 of the TOMM, Trial 2 of the TOMM, and RDS. Then, using previously established cutoff scores of ≤6 and ≤7 for RDS and <45 for Trial 2 of the TOMM, failure rates for both RDS and the TOMM were established. Subsequently, specificity rates were calculated for each of these RDS cutoff scores using failure on Trial 2 of the TOMM as the pass/fail criterion. Given that RDS scores were expected to be significantly related to IQ, specificity rates were calculated for the entire sample, as well as for each IQ group (i.e., average IQ and above; low average IQ and below). It was also anticipated that the above cutoff scores would generate a less than acceptable pass rate for our sample (i.e., below 90%; Boone, 2007; Dean et al., 2008), especially for participants with low average IQ and below; thus, we proceeded to identify a more optimal RDS cutoff score for each IQ group as a secondary aim of this study. Specifically, we aimed to identify a cutoff score that would result in a pass rate similar to that of Trial 2 of the TOMM, which was expected to be around the recommended 90%. Prior to computing failure rates for each IQ group, Pearson correlations were carried out to examine the extent to which RDS and TOMM scores were related to IQ and attention within our sample. When the assumption of normality was violated, Spearman correlation coefficients were used instead. These analyses provided the basis for examining failure rates on RDS across the IQ groups.

Results

This retrospective study consisted of 63 clinically referred adults with a diagnosis of epilepsy or suspected seizures. Participants ranged in age from 18 to 75 (M = 39.90 years, SD = 16.13 years), and the estimated IQ for the sample as a whole was in the average range (M = 94.60; SD = 16.91; range = 63–137). Forty participants had average IQ and above, and 23 participants had low average IQ and below. Most participants were females (55%) and right-handed (86%).
Consistent with the demographics served by our center, the majority of participants were Caucasian (78%), with the remainder being African-American (6%), Asian/Pacific Islander (5%), Latino (6%), and other/unknown (5%). Regarding seizure type, most participants presented with focal epilepsy (70%), with slightly more left-lateralized profiles (43%) than right-lateralized profiles (37%). On average, seizure onset occurred in adulthood (M = 27.00 years, SD = 18.73 years). A total of 33 (52%) participants had documented MRI abnormalities; the remainder had either normal neuroimaging or no available neuroimaging data. See Table 1 for a full breakdown of our sample's characteristics.

Table 1. Patient characteristics (N = 63)

Sex: Male = 35 (56%); Female = 28 (44%)
Race/Ethnicity: Caucasian = 49 (78%); Latino = 4 (6%); Asian/Pacific Islander = 3 (5%); African American = 4 (6%); Other = 2 (3%); Unclassified = 1 (2%)
Handedness: Right = 34 (81%); Left = 6 (14%); Mixed = 1 (2%); Unclassified = 1 (2%)
FSIQ: M = 94.90 (SD = 16.91); range 63–137
IQ classification groups: Average IQ and above = 40 (64%); Low average IQ and below = 23 (36%)
Age (years): M = 41.55 (SD = 17.34); range 18–75
Seizure onset (years): M = 27.00 (SD = 18.73); range 1–70
Number of AEDs: None = 10 (16%); One = 23 (37%); Two = 16 (25%); Three = 13 (20%); Unknown = 1 (2%)
Seizure type: Focal epilepsy syndromes = 44 (70%); Generalized epilepsy syndromes = 12 (19%); Unclassified = 7 (11%)
EEG lateralization: Left = 19 (43%); Right = 16 (37%); Bilateral = 9 (20%)
MRI findings: Normal MRI = 13 (21%); Lesional = 17 (27%); Atrophy = 6 (10%); Other = 10 (15%); No imaging available = 17 (27%)

Note: AEDs = antiepileptic drugs.
For our sample, the mean score on Trial 1 of the TOMM was 45.98 (SD = 4.31; range = 32–50) and the mean score on Trial 2 of the TOMM was 48.16 (SD = 1.85; range = 44–50). Additionally, 65% of our sample passed Trial 1 of the TOMM. Notably, with the exception of one participant, every individual obtained a score of 45 or better (i.e., the recommended cutoff score) on Trial 2 of the TOMM; that is, only one participant within our sample failed Trial 2 of the TOMM. Correlational analyses revealed that scores on Trial 1 and Trial 2 of the TOMM were not significantly correlated with age or IQ. Spearman correlations were used for Trial 1 and Trial 2 of the TOMM, as these variables violated the assumption of normality. With regard to RDS, scores ranged from 4 to 13, with a sample mean of 8.62 (SD = 1.98). Previously established cutoff scores of ≤6 and ≤7 yielded lower than acceptable specificity rates (≤6 = 85%; ≤7 = 77%), corresponding to false positive rates of 14% and 22%, respectively. In contrast, a cutoff score of ≤5 yielded a more acceptable false positive rate (10%), equivalent to a specificity rate of 90%. Furthermore, RDS was not significantly correlated with TOMM Trial 1 scores (r = −.10, p = .43) or TOMM Trial 2 scores (r = −.09, p = .51; see Table 2).
Conversely, RDS scores were significantly positively correlated with IQ (r = .44, p < .01), List Learning (r = .41, p = .01), and Coding (r = .56, p < .01), providing evidence that RDS is not an independent measure of effort but is also linked to attention and intellectual functioning (see Table 2). Accordingly, specificity rates were examined separately for each IQ group as a secondary aim of this study. This revealed substantial differences in failure and false positive rates between the IQ groups. Specifically, a cutoff score of ≤7 yielded an unacceptably high false positive rate (13%) for adults with epilepsy who had average IQ and above, whereas a cutoff score of ≤6 produced a more acceptable false positive rate (8%) and specificity rate (92%) for this IQ group. For individuals with low average IQ and below, even a cutoff score of ≤5 was inappropriate, as it resulted in a high false positive rate (22%). Instead, a cutoff score of ≤4 yielded more appropriate results, producing a lower false positive rate (4%) and higher specificity (96%). See Table 3 for the corresponding false positive and specificity rates by IQ group.

Table 2. Spearman correlations between RDS and TOMM Trial 1, TOMM Trial 2, IQ, RBANS Coding, and RBANS List Learning

RDS and TOMM Trial 1: r = −0.10, p = 0.43
RDS and TOMM Trial 2: r = −0.09, p = 0.51
RDS and IQ: r = 0.44*, p < 0.01
RDS and RBANS Coding: r = 0.42*, p < 0.01
RDS and RBANS List Learning: r = 0.41*, p < 0.01

Notes: TOMM = Test of Memory Malingering; RBANS = Repeatable Battery for the Assessment of Neuropsychological Status; RDS = Reliable Digit Span; Spearman rho coefficients; *correlation significant at the p < .01 level.

Table 3. Specificity and false positive rates for various RDS cutoffs in comparison with the TOMM, by IQ group

Average IQ and above: ≤4 = 3% false positive, 97% specificity; ≤5 = 3%, 97%; ≤6 = 8%, 92%; ≤7 = 13%, 87%
Low average IQ and below: ≤4 = 4% false positive, 96% specificity; ≤5 = 22%, 78%; ≤6 = 26%, 73%; ≤7 = 39%, 61%

Notes: RDS = Reliable Digit Span; TOMM = Test of Memory Malingering.
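As an illustration of how values of the kind reported in Table 3 can be derived, the following is a minimal sketch that treats a TOMM Trial 2 score below 45 as the criterion for suboptimal effort and computes specificity for a candidate RDS cutoff. The record fields and example values are hypothetical and are not the study data.

```python
# Minimal sketch of computing specificity and false positive rates for an RDS
# cutoff, using TOMM Trial 2 < 45 as the criterion for suboptimal effort.
# Record fields and example values are hypothetical, not the study data.

def specificity_for_cutoff(records, rds_cutoff, tomm_cutoff=45):
    """Specificity = proportion of TOMM Trial 2 passers who score above the RDS cutoff."""
    passers = [r for r in records if r["tomm_trial2"] >= tomm_cutoff]
    if not passers:
        return float("nan")
    true_negatives = sum(1 for r in passers if r["rds"] > rds_cutoff)
    return true_negatives / len(passers)

# Toy sample; IQ groups follow the article's split (standard score >= 90 vs. <= 89).
sample = [
    {"rds": 9, "tomm_trial2": 50, "fsiq": 104},
    {"rds": 5, "tomm_trial2": 48, "fsiq": 82},
    {"rds": 7, "tomm_trial2": 44, "fsiq": 95},
]
low_iq = [r for r in sample if r["fsiq"] <= 89]
print(specificity_for_cutoff(sample, rds_cutoff=6))      # specificity in the toy sample
print(1 - specificity_for_cutoff(low_iq, rds_cutoff=4))  # false positive rate, low-IQ group
```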
Discussion

RDS is a widely used embedded PVT in adult clinical and forensic populations (Boone, 2007). Prior work has shown that RDS cutoff scores of ≤6 and ≤7 can detect suboptimal effort across numerous adult clinical populations (Schroeder et al., 2012). However, some have cautioned against the use of such high cutoff scores in adults with ID, cerebrovascular accidents, or severe memory disorders, and in non-native English speakers (Dean et al., 2008; Heinly et al., 2005; Salazar et al., 2007). Still, to date, no study had validated these previously established cutoff scores (≤6, ≤7) in adults with epilepsy; this was the primary purpose of this investigation.

The previously established RDS cutoff scores of ≤6 and ≤7 yielded unacceptably low specificity rates (85% and 77%, respectively) for our sample of adults with epilepsy. Findings also revealed that RDS scores were significantly positively correlated with measures of IQ, as well as with those related to attention. These findings, namely that previously established cutoff scores would generate low specificity rates in adults with epilepsy, were expected, given that this clinical population has known cognitive deficits and that RDS has been shown to be related to intellectual functioning (Aldenkamp & Bodde, 2005; McCagh, Fisk, & Baker, 2009). This study's results are also consistent with those of Welsh and colleagues (2012), who reported similar findings in a pediatric epilepsy sample.

Given that cutoff scores of ≤6 and ≤7 were expected to yield lower than acceptable specificity rates, a secondary and exploratory goal of this study was to establish more optimal cutoff scores for adults with epilepsy. Because RDS has been shown to be related to IQ, it was determined that different cutoff scores would need to be established for the different IQ groups within our sample. Preliminary findings suggest that whereas an RDS cutoff score of ≤6 is most appropriate for individuals with epilepsy who have average IQ and above (specificity = 92%), this cutoff score is not appropriate for adults with epilepsy who have low average IQ and below (specificity = 73%); for the latter group, an RDS cutoff score of ≤4 appears to be more appropriate (specificity = 96%). It should be mentioned that, while a failure rate of substantially less than 10% for a PVT would normally suggest that a recommended cutoff score is not sensitive to suboptimal effort, this does not appear to be the case for our sample, given that only one of our participants failed Trial 2 of the TOMM; thus, it was expected that an optimal RDS cutoff score would classify substantially less than the "expected" 10%. This study provides preliminary evidence that clinicians must be cautious when employing previously established RDS cutoff scores of ≤6 and ≤7 with adults with epilepsy, particularly those with low average IQ and below.
Additionally, our results provide further support that PVT cutoff scores cannot be universally applied and must be independently validated for each clinical population (Welsh et al., 2012).

This study is not without limitations. First, the sample size was relatively small. Relatedly, we were unable to consider potential gender differences or the potential influence of co-morbid disorders. This latter limitation is particularly important because epilepsy rarely occurs in isolation, and other conditions, such as psychiatric disorders, are known to impact performance on PVTs. As such, it is possible that specificity rates for RDS cutoff scores may vary depending on the presence of co-morbid disorders. Notwithstanding this limitation, it should be mentioned that 98% of our sample passed Trial 2 of the TOMM, suggesting that the presence of co-morbid disorders did not lead to suboptimal effort within our sample. It has also been stated that an RDS cutoff score of ≤6 appears to be the most appropriate for adults with epilepsy who have average IQ and above, and an RDS cutoff score of ≤4 appears to be the most appropriate for adults with epilepsy who have low average IQ and below. However, more studies will need to be conducted to establish these cutoff scores with greater confidence, as our sample size for each IQ group was quite small; these cutoff scores should therefore be further explored in a larger epilepsy sample. Lastly, given that only one participant in our sample failed Trial 2 of the TOMM, our results should be replicated in a sample in which more participants fail Trial 2 of the TOMM. This would allow sensitivity figures for RDS cutoff scores to be established in adults with epilepsy, with the goal that an optimal cutoff score should demonstrate strong sensitivity as well as specificity.

Relatedly, it is quite surprising that only one participant (~2%) failed Trial 2 of the TOMM in this study, as this rate is substantially below Boone's (2007) recommended figure of 10% (see Introduction). While it is possible that our sample simply happened to be highly motivated, our findings may also suggest that individuals with epilepsy are more motivated to put forth optimal effort on cognitive testing, perhaps given the repercussions of not doing so (e.g., during a pre-surgical workup). This latter explanation for our study's low rate of failures on Trial 2 of the TOMM highlights the importance of considering the context of a neuropsychological evaluation when interpreting PVT findings, as has been suggested elsewhere (Nelson et al., 2010). Accordingly, future research should determine whether independent base rates of PVT failure need to be established for individuals with epilepsy. Finally, further research should determine whether epilepsy itself contributes independently to poor performance on RDS; preliminary findings by Drane and colleagues (2016) suggest that epilepsy factors, such as interictal epileptiform discharges, can impact performance on PVTs. Unfortunately, we were unable to examine this question as part of this study. Despite these limitations, this study provides preliminary evidence that commonly used RDS cutoff scores of ≤6 and ≤7 may not be appropriate for assessing suboptimal effort in adults with epilepsy, especially those with low average IQ and below.
Additionally, our findings highlight the importance of considering diagnosis, IQ, and cognitive factors when interpreting performance on embedded PVTs, and they underscore that caution should be used when applying cutoff scores across different clinical populations.

Funding

This work was supported, in part, by Finding A Cure for Epilepsy and Seizures (FACES).

Conflict of interest

The authors declare that there are no competing interests to report.

References

Aldenkamp, A. P. (2006). Cognitive impairment in epilepsy: State of affairs and clinical relevance. Seizure, 15, 219–220.
Aldenkamp, A., & Bodde, N. (2005). Behaviour, cognition and epilepsy. Acta Neurologica Scandinavica, 112, 19–25.
Axelrod, B. N. (2002). Validity of the Wechsler Abbreviated Scale of Intelligence and other very short forms of estimating intellectual functioning. Assessment, 9, 17–23.
Axelrod, B. N., Fichtenberg, N. L., Millis, S. R., & Wertheimer, J. C. (2006). Detecting incomplete effort with Digit Span from the Wechsler Adult Intelligence Scale—Third Edition. The Clinical Neuropsychologist, 20, 513–523.
Babikian, T., Boone, K. B., Lu, P., & Arnold, G. (2006). Sensitivity and specificity of various digit span scores in the detection of suspect effort. The Clinical Neuropsychologist, 20, 145–159.
Baxendale, S., McGrath, K., & Thompson, P. J. (2014). Epilepsy & IQ: The clinical utility of the Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV) indices in the neuropsychological assessment of people with epilepsy. Journal of Clinical and Experimental Neuropsychology, 36, 137–143.
Bianchini, K. J., Mathias, C. W., Greve, K. W., Houston, R. J., & Crouch, J. A. (2001). Classification accuracy of the Portland Digit Recognition Test in traumatic brain injury. The Clinical Neuropsychologist, 15, 461–470.
Boone, K. B. (2007). Assessment of feigned cognitive impairment: A neuropsychological perspective. New York, London: Guilford Press.
Bush, S. S., Ruff, R. M., Tröster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity: NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426.
Constantinou, M., Bauer, L., Ashendorf, L., Fisher, J. M., & McCaffrey, R. J. (2005). Is poor performance on recognition memory effort measures indicative of generalized poor performance on neuropsychological tests? Archives of Clinical Neuropsychology, 20, 191–198.
Cragar, D. E., Berry, D. T., Fakhoury, T. A., Cibula, J. E., & Schmitt, F. A. (2006). Performance of patients with epilepsy or psychogenic non-epileptic seizures on four measures of effort. The Clinical Neuropsychologist, 20, 552–566.
Curtis, K. L., Greve, K. W., Bianchini, K. J., & Brennan, A. (2006). California Verbal Learning Test indicators of malingered neurocognitive dysfunction: Sensitivity and specificity in traumatic brain injury. Assessment, 13, 46–61.
Dean, A. C., Victor, T. L., Boone, K. B., & Arnold, G. (2008). The relationship of IQ to effort test performance. The Clinical Neuropsychologist, 22, 705–722.
Drane, D. L., Ojemann, J. G., Kim, M. S., Gross, R. E., Miller, J. W., Faught, R. E., et al. (2016). Interictal epileptiform discharge effects on neuropsychological assessment and epilepsy surgical planning. Epilepsy & Behavior, 56, 131–138.
Duncan, A. (2005). The impact of cognitive and psychiatric impairment of psychotic disorders on the Test of Memory Malingering (TOMM). Assessment, 12, 123–129.
Elger, C. E., Helmstaedter, C., & Kurthen, M. (2004). Chronic epilepsy and cognition. The Lancet Neurology, 3, 663–672.
Fisher, R. S., Acevedo, C., Arzimanoglou, A., Bogacz, A., Cross, J. H., Elger, C. E., et al. (2014). ILAE official report: A practical clinical definition of epilepsy. Epilepsia, 55, 475–482.
Green, P. (2011). Comparison between the Test of Memory Malingering (TOMM) and the Nonverbal Medical Symptom Validity Test (NV-MSVT) in adults with disability claims. Applied Neuropsychology, 18, 18–26.
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6, 218.
Greve, K. W., & Bianchini, K. J. (2004). Setting empirical cut-offs on psychometric indicators of negative response bias: A methodological commentary with recommendations. Archives of Clinical Neuropsychology, 19, 533–541.
Greve, K. W., Bianchini, K. J., & Doane, B. M. (2006). Classification accuracy of the Test of Memory Malingering in traumatic brain injury: Results of a known-groups analysis. Journal of Clinical and Experimental Neuropsychology, 28, 1176–1190.
Greve, K. W., Springer, S., Bianchini, K. J., Black, F. W., Heinly, M. T., Love, J. M., et al. (2007). Malingering in toxic exposure: Classification accuracy of Reliable Digit Span and WAIS-III Digit Span scaled scores. Assessment, 14, 12–21.
Gunner, J. H., Miele, A. S., Lynch, J. K., & McCaffrey, R. J. (2012). Performance of non-neurological older adults on the Wisconsin Card Sorting Test and the Stroop Color–Word Test: Normal variability or cognitive impairment? Archives of Clinical Neuropsychology, 27, 398–405.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., Millis, S. R., & Conference Participants. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.
Heinly, M. T., Greve, K. W., Bianchini, K. J., Love, J. M., & Brennan, A. (2005). WAIS digit span-based indicators of malingered neurocognitive dysfunction: Classification accuracy in traumatic brain injury. Assessment, 12, 429–444.
Hilsabeck, R. C., Gordon, S. N., Hietpas-Wilson, T., & Zartman, A. L. (2011). Use of Trial 1 of the Test of Memory Malingering (TOMM) as a screening measure of effort: Suggested discontinuation rules. The Clinical Neuropsychologist, 25, 1228–1238.
Hunt, T. N., Ferrara, M. S., Miller, L. S., & Macciocchi, S. (2007). The effect of effort on baseline neuropsychological test scores in high school football athletes. Archives of Clinical Neuropsychology, 22, 615–621.
Iverson, G. L., & Tulsky, D. S. (2003). Detecting malingering on the WAIS-III: Unusual Digit Span performance patterns in the normal population and in clinical groups. Archives of Clinical Neuropsychology, 18(1), 1–9.
Lange, R. T., Pancholi, S., Bhagwat, A., Anderson-Barnes, V., & French, L. M. (2012). Influence of poor effort on neuropsychological test performance in US military personnel following mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 34, 453–466.
Larrabee, G. J. (2003). Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410–425.
Lee, G. P., Loring, D. W., & Martin, R. C. (1992). Rey's 15-item visual memory test for the detection of malingering: Normative observations on patients with neurological disorders. Psychological Assessment, 4, 43.
Loring, D. W., Lee, G. P., & Meador, K. J. (2005). Victoria Symptom Validity Test performance in non-litigating epilepsy surgery candidates. Journal of Clinical and Experimental Neuropsychology, 27, 610–617.
Marshall, P., Schroeder, R., O'Brien, J., Fischer, R., Ries, A., Blesi, B., et al. (2010). Effectiveness of symptom validity measures in identifying cognitive and behavioral symptom exaggeration in adult attention deficit hyperactivity disorder. The Clinical Neuropsychologist, 24, 1204–1237.
Mathias, C. W., Greve, K. W., Bianchini, K. J., Houston, R. J., & Crouch, J. A. (2002). Detecting malingered neurocognitive dysfunction using the reliable digit span in traumatic brain injury. Assessment, 9, 301–308.
McCagh, J., Fisk, J. E., & Baker, G. A. (2009). Epilepsy, psychosocial and cognitive functioning. Epilepsy Research, 86(1), 1–14.
McKay, C., Casey, J. E., Wertheimer, J., & Fichtenberg, N. L. (2007). Reliability and validity of the RBANS in a traumatic brain injured sample. Archives of Clinical Neuropsychology, 22, 91–98.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24, 1094–1102.
Mojs, E., Gajewska, E., Głowacka, M. D., & Samborski, W. (2007). The prevalence of cognitive and emotional disturbances in epilepsy and its consequences for therapy. Annales Academiae Medicae Stetinensis, 53, 82–87.
Nelson, N. W., Hoelzle, J. B., McGuire, K. A., Ferrier-Auerbach, A. G., Charlesworth, M. J., & Sponheim, S. R. (2010). Evaluation context impacts neuropsychological performance of OEF/OIF veterans with reported combat-related concussion. Archives of Clinical Neuropsychology, 25, 713–723.
O'Bryant, S. E., Engel, L. R., Kleiner, J. S., Vasterling, J. J., & Black, F. W. (2007). Test of Memory Malingering (TOMM) Trial 1 as a screening measure for insufficient effort. The Clinical Neuropsychologist, 21, 511–521.
Randolph, C., Tierney, M. C., Mohr, E., & Chase, T. N. (1998). The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS): Preliminary clinical validity. Journal of Clinical and Experimental Neuropsychology, 20, 310–319.
Rees, L. M., Tombaugh, T. N., Gansler, D. A., & Moczynski, N. P. (1998). Five validation experiments of the Test of Memory Malingering (TOMM). Psychological Assessment, 10, 10–19.
Salazar, X. F., Lu, P. H., Wen, J., & Boone, K. B. (2007). The use of effort tests in ethnic minorities and in non-English-speaking and English as a second language populations. In K. B. Boone (Ed.), Assessment of feigned cognitive impairment: A neuropsychological perspective (pp. 405–427). New York, NY: Guilford.
Schroeder, R. W., & Marshall, P. S. (2011). Evaluation of the appropriateness of multiple symptom validity indices in psychotic and non-psychotic psychiatric populations. The Clinical Neuropsychologist, 25, 437–453.
Schroeder, R. W., Twumasi-Ankrah, P., Baade, L. E., & Marshall, P. S. (2012). Reliable Digit Span: A systematic review and cross-validation study. Assessment, 19, 21–30.
Schubert, R. (2005). Attention deficit disorder and epilepsy. Pediatric Neurology, 32(1), 1–10.
Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists' beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22, 213–223.
Slick, D. J., Sherman, E. M., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13, 545–561.
Slick, D. J., Tan, J. E., Strauss, E. H., & Hultsch, D. F. (2004). Detecting malingering: A survey of experts' practices. Archives of Clinical Neuropsychology, 19, 465–473.
Teichner, G., & Wagner, M. T. (2004). The Test of Memory Malingering (TOMM): Normative data from cognitively intact, cognitively impaired, and elderly patients with dementia. Archives of Clinical Neuropsychology, 19, 455–464.
Tombaugh, T. N. (1996). Test of Memory Malingering: TOMM. North Tonawanda, NY: Multi-Health Systems.
Tombaugh, T. N. (1997). The Test of Memory Malingering (TOMM): Normative data from cognitively intact and cognitively impaired individuals. Psychological Assessment, 9, 260.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale: Administration and scoring manual (3rd ed.). San Antonio, TX: Psychological Corporation.
Wechsler, D. (2008). Wechsler Adult Intelligence Scale (4th ed.). San Antonio, TX: Pearson.
Wechsler, D. (2011). Wechsler Abbreviated Scale of Intelligence (WASI-II) (2nd ed.). Bloomington, MN: Pearson.
Weinborn, M., Orr, T., Woods, S. P., Conover, E., & Feix, J. (2003). A validation of the Test of Memory Malingering in a forensic psychiatric setting. Journal of Clinical and Experimental Neuropsychology, 25, 979–990.
Welsh, A. J., Bender, H. A., Whitman, L. A., Vasserman, M., & MacAllister, W. S. (2012). Clinical utility of reliable digit span in assessing effort in children and adolescents with epilepsy. Archives of Clinical Neuropsychology, 27, 735–741.
Wisdom, N. M., Brown, W. L., Chen, D. K., & Collins, R. L. (2012). The use of all three Test of Memory Malingering trials in establishing the level of effort. Archives of Clinical Neuropsychology, 27, 208–212.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

Reliable Digit Span: Does it Adequately Measure Suboptimal Effort in an Adult Epilepsy Population?

Loading next page...
 
/lp/ou_press/reliable-digit-span-does-it-adequately-measure-suboptimal-effort-in-an-AiHt8GUirR
Publisher
Oxford University Press
Copyright
© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
ISSN
0887-6177
eISSN
1873-5843
D.O.I.
10.1093/arclin/acy027
Publisher site
See Article on Publisher Site

Abstract

Abstract Objective Assessment of performance validity is a necessary component of any neuropsychological evaluation. Prior research has shown that cutoff scores of ≤6 or ≤7 on Reliable Digit Span (RDS) can detect suboptimal effort across numerous adult clinical populations; however, these scores have not been validated for that purpose in an adult epilepsy population. This investigation aims to determine whether these previously established RDS cutoff scores could detect suboptimal effort in adults with epilepsy. Method Sixty-three clinically referred adults with a diagnosis of epilepsy or suspected seizures were administered the Digit Span subtest of the Wechsler Adult Intelligence Scale (WAIS-III or WAIS-IV). Most participants (98%) passed Trial 2 of the Test of Memory Malingering (TOMM), achieving a score of ≥45. Results Previously established cutoff scores of ≤6 and ≤7 on RDS yielded a specificity rate of 85% and 77% respectively. Findings also revealed that RDS scores were positively related to attention and intellectual functioning. Given the less than ideal specificity rate associated with each of these cutoff scores, together with their strong association to cognitive factors, secondary analyses were conducted to identify more optimal cutoff scores. Preliminary results suggest that an RDS cutoff score of ≤4 may be more appropriate in a clinically referred adult epilepsy population with a low average IQ or lower. Conclusions Preliminary findings indicate that cutoff scores of ≤6 and ≤7 on RDS are not appropriate in adults with epilepsy, especially in individuals with low average IQ or below. Performance validity test, Suboptimal effort, Epilepsy, Reliable digit span, Test of Memory Malingering Introduction Performance validity testing is an integral part of any neuropsychological evaluation, particularly in a forensic setting or when there is a question of secondary gain (Axelrod, Fichtenberg, Millis, & Wertheimer, 2006; Lee, Loring, & Martin, 1992). Still, suboptimal effort can occur for a multitude of other reasons as well, including psychiatric conditions, inattentiveness, poor motivation, and fatigue (Hunt, Ferrara, Miller, & Macciocchi, 2007; Lange, Pancholi, Bhagwat, Anderson-Barnes, & French, 2012). Accordingly, the National Academy of Neuropsychology (Bush et al., 2005) and the American Academy of Clinical Neuropsychology (Heilbronner et al., 2009) have issued strong statements urging neuropsychologists to incorporate performance validity tests (PVTs) in every evaluation, regardless of the referral question (Schroeder, Twumasi-Ankrah, Baade, & Marshall, 2012; Sharland & Gfeller, 2007), as these instruments provide insight into whether an examinee’s scores are likely reflective of their actual level of cognitive functioning or the result of them not putting forth maximum effort (Constantinou, Bauer, Ashendorf, Fisher, & McCaffrey, 2005). PVTs are classified as either stand-alone or embedded measures (Gunner, Miele, Lynch, & McCaffrey, 2012; Hilsabeck, Gordon, Hietpas-Wilson, & Zartman, 2011). While stand-alone PVTs are designed to specifically assess effort, embedded PVTs are derived secondary to the instrument’s own clinical utility (Heilbronner et al., 2009). Particular strengths of embedded PVTs are that they provide clinicians with multiple opportunities to monitor engagement without having to lengthen testing (Curtis, Greve, Bianchini, & Brennan, 2006; Gunner et al., 2012; Mathias, Greve, Bianchini, Houston, & Crouch, 2002). 
Conversely, a drawback of embedded PVTs is that their cutoff scores are derived from instruments that also measure cognitive functioning. Thus, poor performance on these measures can also occur as a result of genuine cognitive impairment (Dean, Victor, Boone, & Arnold, 2008; Slick, Sherman, & Iverson, 1999). Accordingly, practitioners need to select embedded PVTs, as well as cutoff scores, in which they can be highly confident that poor performance is not linked to genuine cognitive impairment, such as low intellectual functioning (Axelrod et al., 2006; Bianchini, Mathias, Greve, Houston, & Crouch, 2001). Prior work has shown that IQ and performance on PVTs (e.g., Rey 15-Item, Test of Memory Malingering, and Word Memory Test) are significantly related, such that lower IQ adversely impacts performance on PVTs (see Dean et al., 2008 for a review). Ideally, PVTs should demonstrate strong sensitivity and specificity (Greve & Bianchini, 2004). However, given that these variables are often inversely related, this is not always possible. In such situations, practitioners are encouraged to prioritize specificity, with extreme measures being taken to limit the number of false positives (Greve & Bianchini, 2004). Additionally, given that base rates for PVT failures differ across clinical groups (Hilsabeck et al., 2011; Mittenberg, Patton, Canyock, & Condit, 2002), cutoff scores must be independently validated in a given population, with the general rule of thumb being that 10% of any given population should be identified as putting forth suboptimal effort (Boone, 2007; Dean et al., 2008). The Digit Span subtest has received a lot of attention in the PVT literature (Boone, 2007). Prior research demonstrates that various components of the Digit Span subtest (Babikian, Boone, Lu, & Arnold, 2006; Heinly, Greve, Bianchini, Love, & Brennan, 2005), including reliable digit span (RDS; Babikian et al., 2006; Larrabee, 2003; Schroeder et al., 2012), can detect suboptimal effort in many adult clinical populations, such as those with toxic exposure (Greve et al., 2007), attention-deficit/hyperactivity disorder (Marshall et al., 2010), traumatic brain injury (Heinly et al., 2005; Mathias et al., 2002), and psychotic disorders (Schroeder & Marshall, 2011). Indeed, RDS is among the most commonly used embedded PVTs in neuropsychology (Boone, 2007) and is calculated by adding the longest strings of digits forward and digits backward in which both trials were passed (Greiffenstein, Baker, & Gola, 1994). To date, few studies have investigated whether the Digit Span subtest is a valid PVT in adults with epilepsy (Iverson & Tulsky, 2003; Loring, Lee, & Meador, 2005). Additionally, no study has examined the RDS component of the Digit Span subtest within this population. Historically, RDS cutoff scores of ≤6 and ≤7 have been used to detect suboptimal effort across many adult populations because they are typically associated with a 90% pass rate (Schroeder et al., 2012). Nevertheless, some have cautioned against the use of such high cutoff scores, as they generate high false positive rates (i.e., less than a 90% pass rate) in several adult clinical populations, including those with intellectual disability (ID), cerebrovascular accidents, severe memory disorders, and non-English native speakers (Dean et al., 2008; Heinly et al., 2005; Salazar, Lu, Wen, Boone, & Boone, 2007). As such, these researchers have suggested that a lower cutoff score be used within these clinical populations (Schroeder et al., 2012). 
Still, these cutoff scores (≤6, ≤7) have not been validated in adults with epilepsy. Welsh, Bender, Whitman, Vasserman, and MacAllister (2012) demonstrated that cutoff scores validated in other clinical populations cannot simply be applied to individuals with epilepsy. In their study, which examined RDS in a pediatric epilepsy sample, an established RDS cutoff score of ≤6 yielded a failure rate of 65%, corresponding to a pass rate far below the recommended criterion that a PVT yield at least a 90% pass rate (Boone, 2007; Dean et al., 2008). Further, when failures on the Test of Memory Malingering (TOMM; Tombaugh, 1996) were compared with failures on RDS, this cutoff score generated a specificity value of only 71%. Lastly, they showed that RDS scores were positively correlated with IQ, suggesting that RDS is not a completely independent measure of effort. Welsh and colleagues (2012) thus provide preliminary evidence that a cutoff score lower than ≤6 may also be needed in an adult epilepsy population, as these individuals often exhibit cognitive deficits. Specifically, it has been reported that up to a third of adults with epilepsy report cognitive complaints (Mojs, Gajewska, Głowacka, & Samborski, 2007). While a wide range of cognitive functions can be impaired within this population, the most frequently reported deficits involve memory, processing speed, and attention (Aldenkamp, 2006; Schubert, 2005). Additionally, general intellectual functioning, as measured by IQ, has been found to be compromised within this clinical population (Baxendale, McGrath, & Thompson, 2014). Lastly, seizure factors, including seizure etiology, seizure type, seizure frequency, and age of seizure onset, have been shown to influence the pattern and extent of cognitive impairment (Elger, Helmstaedter, & Kurthen, 2004). The current study has several goals. The primary purpose is to evaluate the specificity of RDS using the most widely cited RDS cutoff scores, ≤6 and ≤7, by comparing failure rates on RDS with those on Trial 2 of the TOMM (Tombaugh, 1996) in an adult epilepsy population. For this comparison, a cutoff score of <45 on Trial 2 of the TOMM will be used, as recommended by O'Bryant, Engel, Kleiner, Vasterling, and Black (2007) and Hilsabeck and colleagues (2011). As mentioned, the general rule of thumb is that roughly 10% of any given clinical population should be identified as putting forth suboptimal effort, unless established base rates of PVT failure exist for that particular population. Another goal of this study is to examine the relationship between RDS and overall intellectual functioning, as prior work (Welsh et al., 2012) has shown that RDS is related to IQ in children with epilepsy. A related goal is to examine the relationship between attention and RDS, because RDS is derived from the Digit Span subtest of the Wechsler instruments, which is a measure of attention and working memory. If such a relationship emerges, it would further suggest that RDS is not an independent measure of effort but is also linked to deficits in attention and working memory. Finally, given that RDS scores were expected to be significantly related to IQ, and that cutoff scores of ≤6 and ≤7 were expected to yield a lower than acceptable pass rate for our sample (i.e., less than 90% passing RDS), an exploratory goal of this study is to identify a more optimal cutoff score for individuals with epilepsy who have a low IQ.
To accomplish this latter goal, we will again compare failure rates on RDS with those on Trial 2 of the TOMM (Tombaugh, 1996), using a cutoff score of <45. However, given that Trial 2 of the TOMM has been shown to be slightly less sensitive than other measures (Green, 2011), failure rates on RDS are not expected to be identical to those on Trial 2 of the TOMM. Instead, we will use the general rule of thumb that 10% of a clinical population should fail a given PVT (Boone, 2007), together with our sample's actual failure rate on Trial 2 of the TOMM, to inform our decision as to what constitutes a more optimal cutoff score. Methods Participants This retrospective study consisted of 63 clinically referred adults with a diagnosis of epilepsy or suspected seizures who underwent neuropsychological testing at the NYU Comprehensive Epilepsy Center as part of their routine clinical care, conducted on either an inpatient or outpatient basis. As part of this comprehensive neuropsychological evaluation, all participants completed the TOMM, the Digit Span subtest of one of the Wechsler instruments, the Coding and List Learning subtests of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS; Randolph, Tierney, Mohr, & Chase, 1998), and a Wechsler scale providing an estimate of IQ. The primary inclusion criterion for this study was a confirmed diagnosis of epilepsy or suspected seizures. A diagnosis of epilepsy or suspected seizures was rendered by a neurologist/epileptologist via neurological workup that most often included video electroencephalography (98%) and a clinical interview, as suggested by the International League Against Epilepsy (Fisher et al., 2014). In only one instance was an epilepsy diagnosis made via a routine EEG. When possible, EEG established the nature of participants' seizures (i.e., focal, generalized, mixed), as well as the lateralization and localization of seizure onset. Additionally, magnetic resonance imaging (MRI) findings were available for most of our sample. Epilepsy severity variables, including age of seizure onset and number of antiepileptic drugs (AEDs), were also recorded. Lastly, English had to be participants' primary language. Procedures All participants were administered the Digit Span subtest of the Wechsler Adult Intelligence Scale (WAIS), either the third edition (Wechsler, 1997) or the fourth edition (Wechsler, 2008), the TOMM (Tombaugh, 1996), and a Wechsler measure of intelligence (see below for a full breakdown of Wechsler measures) for an estimate of full-scale IQ, as part of a comprehensive neuropsychological assessment. Additionally, to assess attention, participants were administered the Coding and List Learning subtests of the RBANS (Randolph et al., 1998). In instances in which participants obtained a score of ≥45 on Trial 1 of the TOMM, Trial 2 of the TOMM was not administered, as recommended by O'Bryant and colleagues (2007) and Hilsabeck and colleagues (2011), and the Trial 1 score was carried forward as the Trial 2 score. All instruments were administered in a standardized fashion according to their manuals. Measures Reliable Digit Span The primary dependent measure in this study is RDS. As mentioned, RDS is an embedded measure obtained from the Digit Span subtest and is calculated by summing the maximum number of digits forward with the maximum number of digits backward for which both trials were passed (Greiffenstein et al., 1994).
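To make the scoring rule concrete, the following is a minimal sketch (not the authors' scoring code) of how RDS could be derived from per-trial Digit Span results; the data layout and function names are hypothetical and assume the standard administration of two trials at each span length.

```python
# Minimal sketch (not the authors' code): computing Reliable Digit Span (RDS).
# Each trial is recorded as (span_length, passed), with two trials administered
# per span length. Per Greiffenstein et al. (1994):
#   RDS = longest forward span with both trials passed
#         + longest backward span with both trials passed.

from collections import defaultdict

def longest_reliable_span(trials):
    """trials: iterable of (span_length, passed) pairs for one condition
    (forward or backward). Returns the longest length at which both
    administered trials were passed, or 0 if there is none."""
    passes = defaultdict(int)
    for length, passed in trials:
        if passed:
            passes[length] += 1
    reliable_lengths = [length for length, n_passed in passes.items() if n_passed == 2]
    return max(reliable_lengths, default=0)

def reliable_digit_span(forward_trials, backward_trials):
    return (longest_reliable_span(forward_trials)
            + longest_reliable_span(backward_trials))

# Hypothetical example: both 5-digit forward trials passed but only one 6-digit
# trial; both 3-digit backward trials passed but only one 4-digit trial.
forward = [(4, True), (4, True), (5, True), (5, True), (6, True), (6, False)]
backward = [(3, True), (3, True), (4, True), (4, False)]
print(reliable_digit_span(forward, backward))  # -> 5 + 3 = 8
```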
Prior research (see Introduction) has shown that cutoff scores of ≤6 and ≤7 are appropriate for detecting suboptimal effort in many adult clinical populations, although some have cautioned against the use of such high cutoff scores in individuals with ID, cerebrovascular accidents, or severe memory disorders, and in non-native English speakers (Greve et al., 2007; Heinly et al., 2005; Marshall et al., 2010; Mathias et al., 2002; Schroeder & Marshall, 2011; Schroeder et al., 2012). Lastly, although the factor structure of backward digit span differs between the WAIS-III and WAIS-IV, research has shown that RDS cutoff scores of ≤6 and ≤7 are appropriate for both versions of the WAIS (Schroeder et al., 2012). Test of Memory Malingering The TOMM (Tombaugh, 1996) is one of the most popular and well-validated PVTs in the field of neuropsychology (Sharland & Gfeller, 2007; Slick, Tan, Strauss, & Hultsch, 2004). A cutoff score of <45 on Trial 1 of the TOMM has routinely demonstrated high rates of sensitivity and specificity across the most cognitively impaired clinical populations, providing strong evidence that it measures performance validity rather than cognition (Duncan, 2005; Greve, Bianchini, & Doane, 2006; Rees, Tombaugh, Gansler, & Moczynski, 1998; Teichner & Wagner, 2004; Tombaugh, 1997; Weinborn, Orr, Woods, Conover, & Feix, 2003). The TOMM is a 50-item visual recognition task consisting of two learning trials and an optional retention trial. Participants are first presented with pictures of everyday objects for 3 s each, after which they are asked to choose between a foil image and the correct image. Prior research has demonstrated that both trials of the TOMM can identify poor effort in an adult epilepsy population (Cragar, Berry, Fakhoury, Cibula, & Schmitt, 2006; Wisdom, Brown, Chen, & Collins, 2012), although Trial 2 of the TOMM has been shown to be a more clinically valid PVT (see previous citations). Intellectual Assessment The third goal of this study is to examine the relationship between RDS scores and intellectual functioning; thus, full-scale IQ (FSIQ) was measured using several different versions of the Wechsler intelligence tests. The majority of participants (91%) received the four-subtest version of the Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II; Wechsler, 2011), with the remainder completing either the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III; Wechsler, 1997) or the Wechsler Adult Intelligence Scale, Fourth Edition (WAIS-IV; Wechsler, 2008). The WAIS-IV manual (Wechsler, 2008) indicates that the WAIS-IV FSIQ is highly correlated (.92) with the WASI-II four-subtest FSIQ. Additionally, prior research has demonstrated that these Wechsler instruments are highly correlated (Axelrod, 2002); thus, it was deemed appropriate to collapse full-scale IQ scores across these instruments into a single variable. IQ was examined as both a continuous and a categorical variable. When analyzed as a categorical variable, the following categories were used to classify IQ: (A) average IQ and above, standard score ≥90; and (B) low average IQ and below, standard score ≤89. Repeatable Battery for the Assessment of Neuropsychological Status The RBANS (Randolph et al., 1998) is a brief cognitive screener commonly used in adult neuropsychological evaluations to assess a wide range of cognitive domains, including immediate memory, visuospatial/constructional abilities, language, attention, and delayed memory.
Performance on each of these indices can be broken down into a number of subtests. In keeping with the aims of this study, the List Learning and Coding subtests of the RBANS were utilized, as prior research has shown that these subtests correlate significantly with measures of attention (McKay, Casey, Wertheimer, & Fichtenberg, 2007). Raw scores were recorded for each of these subtests. Finally, the Digit Span subtest of the RBANS was not examined in this study, given its near-identical relationship to the Digit Span subtest of the Wechsler instruments. Statistical Plan Descriptive statistics were computed for demographic variables (e.g., age, race, and ethnicity), seizure variables (e.g., age of seizure onset, type of epilepsy), and IQ. Basic descriptive statistics were also gathered for performance on Trial 1 of the TOMM, Trial 2 of the TOMM, and RDS. Then, using previously established cutoff scores of ≤6 and ≤7 for RDS and <45 for Trial 2 of the TOMM, failure rates for both RDS and the TOMM were established. Subsequently, specificity rates were calculated for each of these RDS cutoff scores using pass/fail status on Trial 2 of the TOMM as the criterion. Given that RDS scores were expected to be significantly related to IQ, specificity rates were calculated for the entire sample, as well as for each IQ group (i.e., average IQ and above; low average IQ and below). It was also anticipated that the above cutoff scores would generate a less than acceptable pass rate for our sample (i.e., below 90%; Boone, 2007; Dean et al., 2008), especially for participants with low average IQ and below; thus, we proceeded to identify a more optimal RDS cutoff score for each IQ group as a secondary aim of this study. Specifically, we aimed to identify a cutoff score that would result in a pass rate comparable to that of Trial 2 of the TOMM, which was expected to be around the recommended 90%. Prior to calculating failure rates for each IQ group, Pearson correlations were computed to examine the extent to which RDS and TOMM scores were related to IQ and attention within our sample. When the assumption of normality was violated, Spearman correlation coefficients were used instead. These analyses provided the basis for examining failure rates on RDS across the different IQ groups. Results This retrospective study consisted of 63 clinically referred adults with a diagnosis of epilepsy or suspected seizures. Participants ranged in age from 18 to 75 (M = 39.90 years, SD = 16.13 years), and the estimated IQ for the sample as a whole was in the average range (M = 94.60; SD = 16.91; range = 63–137). Relatedly, 40 participants had average IQ and above, and 23 participants had low average IQ and below. Most participants were female (55%) and right-handed (86%). Consistent with the demographics served at our center, the majority of participants were Caucasian (78%), with the remainder being African-American (6%), Asian/Pacific Islander (5%), Latino (6%), and other/unknown (5%). Regarding seizure type, most participants presented with focal epilepsy (70%), with slightly more left-lateralized profiles (43%) than right-lateralized profiles (37%). On average, seizure onset occurred in adulthood (M = 27.00 years, SD = 18.73 years). A total of 33 (52%) participants had documented MRI abnormalities; the remainder had either normal neuroimaging or no available neuroimaging data. See Table 1 for a full breakdown of our sample's characteristics.
Table 1. Patient characteristics (N = 63)
Variable: Mean (SD); Range or N (%)
Sex: Male = 35 (56%); Female = 28 (44%)
Race/Ethnicity: Caucasian = 49 (78%); Latino = 4 (6%); Asian/Pacific Islander = 3 (5%); African American = 4 (6%); Other = 2 (3%); Unclassified = 1 (2%)
Handedness: Right = 34 (81%); Left = 6 (14%); Mixed = 1 (2%); Unclassified = 1 (2%)
FSIQ: 94.90 (16.91); 63–137
IQ classification groups: Average IQ and above = 40 (64%); Low average IQ and below = 23 (36%)
Age (years): 41.55 (17.34); 18–75
Seizure onset (years): 27.00 (18.73); 1–70
Number of AEDs: None = 10 (16%); One = 23 (37%); Two = 16 (25%); Three = 13 (20%); Unknown = 1 (2%)
Seizure type: Focal epilepsy syndromes = 44 (70%); Generalized epilepsy syndromes = 12 (19%); Unclassified = 7 (11%)
EEG lateralization: Left = 19 (43%); Right = 16 (37%); Bilateral = 9 (20%)
MRI findings: Normal MRI = 13 (21%); Lesional = 17 (27%); Atrophy = 6 (10%); Other = 10 (15%); No imaging available = 17 (27%)
Note: AEDs = antiepileptic drugs.
For our sample, the mean score on Trial 1 of the TOMM was 45.98 (SD = 4.31; range = 32–50) and the mean score on Trial 2 of the TOMM was 48.16 (SD = 1.85; range = 44–50). Additionally, 65% of our sample passed Trial 1 of the TOMM. Unique to our sample is the fact that, with the exception of one participant, every individual obtained a score of 45 or better (i.e., the recommended cutoff score) on Trial 2 of the TOMM; that is, only one participant within our sample failed Trial 2 of the TOMM. Correlational analyses revealed that scores on Trial 1 and Trial 2 of the TOMM were not significantly correlated with age or IQ. Spearman correlations were used for Trial 1 and Trial 2 of the TOMM, as these variables violated the assumption of normality. With regard to RDS, scores ranged from 4 to 13, with a sample mean of 8.62 (SD = 1.98). Previously established cutoff scores of ≤6 and ≤7 yielded lower than acceptable specificity rates (≤6 = 85%; ≤7 = 77%), corresponding to false positive rates of 14% and 22%, respectively. Nonetheless, a cutoff score of ≤5 yielded a more acceptable false positive rate (10%), equivalent to a specificity rate of 90%. Furthermore, RDS was not significantly correlated with TOMM Trial 1 scores (r = −.10, p = .43) or TOMM Trial 2 scores (r = −.09, p = .51; see Table 2).
Conversely, RDS scores were significantly positively correlated with IQ (r = .44, p < .01), List Learning (r = .41, p = .01), and Coding (r = .56, p < .01), providing evidence that RDS is not an independent measure of effort but is also linked to attention and intellectual functioning (see Table 2). Accordingly, specificity rates were examined separately for each IQ group as a secondary aim of this study. Further review revealed substantial differences in the failure and false positive rates between the two IQ groups. Specifically, a cutoff score of ≤7 yielded an unacceptably high false positive rate (13%) for adults with epilepsy who had average IQ and above. Conversely, a cutoff score of ≤6 produced a more acceptable false positive rate (8%) and specificity rate (92%) for this IQ group. For individuals with low average IQ and below, even a cutoff score of ≤5 was inappropriate, as it resulted in a high false positive rate (22%). Instead, a cutoff score of ≤4 yielded more appropriate results, with a lower false positive rate (4%) and higher specificity (96%). See Table 3 for the corresponding false positive and specificity rates by IQ group.
Table 2. Spearman correlations between RDS, TOMM Trial 1, TOMM Trial 2, IQ, RBANS Coding, and RBANS List Learning
RDS vs. TOMM Trial 1a: r = −0.10, p = 0.43
RDS vs. TOMM Trial 2a: r = −0.09, p = 0.51
RDS vs. IQa: r = 0.44*, p < 0.01
RDS vs. RBANS Codinga: r = 0.42*, p < 0.01
RDS vs. RBANS List Learninga: r = 0.41*, p < 0.01
Notes: TOMM = Test of Memory Malingering; RBANS = Repeatable Battery for the Assessment of Neuropsychological Status; RDS = Reliable Digit Span; aSpearman rho coefficients; *Correlation significant at the p < .01 level.
Table 3. Specificity and false positive rates for various RDS cutoffs in comparison with the TOMM, by IQ group
RDS cutoff: Average IQ and above (false positive % / specificity %); Low average IQ and below (false positive % / specificity %)
≤4: 3 / 97; 4 / 96
≤5: 3 / 97; 22 / 78
≤6: 8 / 92; 26 / 73
≤7: 13 / 87; 39 / 61
Notes: RDS = Reliable Digit Span; TOMM = Test of Memory Malingering.
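To illustrate how figures such as those in Table 3 can be produced, the following is a minimal sketch (not the study's analysis code) of computing specificity and false positive rates for candidate RDS cutoffs against the TOMM Trial 2 < 45 criterion described in the Statistical Plan, split by IQ group; the record format and example values are hypothetical.

```python
# Minimal sketch (assumed data, not the study dataset): specificity and false
# positive rates for candidate RDS cutoffs, using TOMM Trial 2 < 45 as the
# criterion for suboptimal effort and splitting participants by IQ group.

def rates_for_cutoff(participants, rds_cutoff):
    """Among participants who passed TOMM Trial 2 (score >= 45, i.e., presumed
    adequate effort), the false positive rate is the proportion scoring at or
    below the RDS cutoff; specificity is 1 minus that proportion."""
    good_effort = [p for p in participants if p["tomm_trial2"] >= 45]
    if not good_effort:
        return None, None
    false_positives = sum(1 for p in good_effort if p["rds"] <= rds_cutoff)
    fp_rate = false_positives / len(good_effort)
    return 1 - fp_rate, fp_rate  # (specificity, false positive rate)

def rates_by_iq_group(participants):
    groups = {
        "average IQ and above": [p for p in participants if p["fsiq"] >= 90],
        "low average IQ and below": [p for p in participants if p["fsiq"] <= 89],
    }
    for label, group in groups.items():
        for cutoff in (4, 5, 6, 7):
            spec, fp = rates_for_cutoff(group, cutoff)
            if spec is None:
                continue  # no valid-effort participants in this group
            print(f"{label}, RDS <= {cutoff}: specificity {spec:.0%}, false positives {fp:.0%}")

# Hypothetical records; a real analysis would load the clinical dataset instead.
sample = [
    {"rds": 9, "tomm_trial2": 50, "fsiq": 102},
    {"rds": 6, "tomm_trial2": 48, "fsiq": 85},
    {"rds": 5, "tomm_trial2": 44, "fsiq": 78},  # fails TOMM Trial 2, excluded
]
rates_by_iq_group(sample)
```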
Discussion RDS is a widely used embedded PVT in adult clinical and forensic populations (Boone, 2007). Prior work has shown that RDS cutoff scores of ≤6 and ≤7 can detect suboptimal effort across numerous adult clinical populations (Schroeder et al., 2012). However, some have cautioned against the use of such high cutoff scores in adults with ID, cerebrovascular accidents, or severe memory disorders, and in non-native English speakers (Dean et al., 2008; Heinly et al., 2005; Salazar et al., 2007). Still, to date, no study had validated these previously established cutoff scores (≤6, ≤7) in adults with epilepsy; doing so was the primary purpose of this investigation. The previously established RDS cutoff scores of ≤6 and ≤7 yielded unacceptably low specificity rates (85% and 77%, respectively) for our sample of adults with epilepsy. Findings also revealed that RDS scores were significantly positively correlated with measures of IQ, as well as with measures of attention. These findings were expected: given that this clinical population has known cognitive deficits and that RDS has been shown to be related to intellectual functioning, previously established cutoff scores could be anticipated to generate low specificity rates in adults with epilepsy (Aldenkamp & Bodde, 2005; McCagh, Fisk, & Baker, 2009). This study's results are also consistent with those of Welsh and colleagues (2012), who reported similar findings in a pediatric epilepsy sample. Because cutoff scores of ≤6 and ≤7 were expected to yield a lower than acceptable pass rate, a secondary and exploratory goal of this study was to establish more optimal cutoff scores for adults with epilepsy. Given that RDS has been shown to be related to IQ, it was determined that different cutoff scores would need to be established for different IQ groups within our sample. Preliminary findings suggest that whereas an RDS cutoff score of ≤6 is most appropriate for individuals with epilepsy who have average IQ and above (specificity = 92%), this cutoff score is not appropriate for adults with epilepsy who have low average IQ and below (specificity = 73%); instead, an RDS cutoff score of ≤4 appears to be more appropriate (specificity = 96%). It should be mentioned that, while a failure rate substantially below 10% on a PVT would normally suggest that a recommended cutoff score is not sensitive to suboptimal effort, this does not appear to be the case for our sample, given that only one of our participants failed Trial 2 of the TOMM. Thus, it was expected that an optimal RDS cutoff score would classify substantially fewer than the "expected" 10%. This study provides preliminary evidence that clinicians must be cautious when employing previously established RDS cutoff scores of ≤6 and ≤7 with adults with epilepsy, particularly those with low average IQ and below.
Additionally, our results provide further support for the view that PVT cutoff scores cannot be universally applied and must be independently validated for each clinical population (Welsh et al., 2012). This study is not without limitations. First, the sample size was relatively small. Relatedly, we were unable to consider potential gender differences or the potential influence of co-morbid disorders. This latter limitation is particularly important because epilepsy rarely occurs in isolation, and other conditions, such as psychiatric disorders, are known to impact performance on PVTs. As such, it is possible that the specificity rates associated with RDS cutoff scores may vary depending on the presence of co-morbid disorders. Notwithstanding this limitation, it should be mentioned that 98% of our sample passed Trial 2 of the TOMM, suggesting that the presence of co-morbid disorders did not influence RDS performance within our sample. We have also suggested that an RDS cutoff score of ≤6 appears to be the most appropriate for adults with epilepsy who have average IQ and above, and an RDS cutoff score of ≤4 the most appropriate for adults with epilepsy who have low average IQ and below. However, more studies will need to be conducted to establish these cutoff scores with greater confidence, as our sample size for each IQ group was quite small; these cutoff scores should therefore be further explored in a larger epilepsy sample. Lastly, given that only one participant in our sample failed Trial 2 of the TOMM, our results should be replicated in a sample in which more participants fail Trial 2 of the TOMM. This would allow sensitivity figures for RDS cutoff scores to be established in adults with epilepsy, with the goal that an optimal cutoff score demonstrate strong sensitivity as well as specificity. Relatedly, it is quite surprising that only one participant (~2%) failed Trial 2 of the TOMM in this study, as this rate is substantially below Boone's (2007) recommended figure of 10% (see Introduction). While it is possible that our sample simply happened to be highly motivated, our findings may also suggest that individuals with epilepsy are more motivated to put forth optimal effort on cognitive testing, perhaps because of the repercussions of not doing so (e.g., during a pre-surgical workup). This latter explanation for our study's low rate of failures on Trial 2 of the TOMM highlights the importance of considering the context of a neuropsychological evaluation when interpreting PVT findings, as has been suggested elsewhere (Nelson et al., 2010). In light of this, future research should determine whether independent base rates of PVT failure should be established for individuals with epilepsy. Finally, further research should determine whether epilepsy contributes independently to poor performance on RDS. Preliminary findings by Drane and colleagues (2016) suggest that epilepsy factors, such as interictal epileptiform discharges, can impact performance on PVTs; unfortunately, we were unable to examine this question as part of this study. Despite these limitations, this study provides preliminary evidence that commonly used RDS cutoff scores of ≤6 and ≤7 may not be appropriate for assessing suboptimal effort in adults with epilepsy, especially those with low average IQ and below.
Additionally, our findings highlight the importance of considering diagnosis, IQ, and other cognitive factors when interpreting performance on embedded PVTs, and they underscore that caution should be used when applying cutoff scores across different clinical populations. Funding This work was supported, in part, by Finding A Cure for Epilepsy and Seizures (FACES). Conflict of interest The authors declare that there are no competing interests to report.
References
Aldenkamp, A. P. (2006). Cognitive impairment in epilepsy: State of affairs and clinical relevance. Seizure, 15, 219–220.
Aldenkamp, A., & Bodde, N. (2005). Behaviour, cognition and epilepsy. Acta Neurologica Scandinavica, 112, 19–25.
Axelrod, B. N. (2002). Validity of the Wechsler Abbreviated Scale of Intelligence and other very short forms of estimating intellectual functioning. Assessment, 9, 17–23.
Axelrod, B. N., Fichtenberg, N. L., Millis, S. R., & Wertheimer, J. C. (2006). Detecting incomplete effort with digit span from the Wechsler Adult Intelligence Scale—Third Edition. The Clinical Neuropsychologist, 20, 513–523.
Babikian, T., Boone, K. B., Lu, P., & Arnold, G. (2006). Sensitivity and specificity of various digit span scores in the detection of suspect effort. The Clinical Neuropsychologist, 20, 145–159.
Baxendale, S., McGrath, K., & Thompson, P. J. (2014). Epilepsy & IQ: The clinical utility of the Wechsler Adult Intelligence Scale–Fourth Edition (WAIS–IV) indices in the neuropsychological assessment of people with epilepsy. Journal of Clinical and Experimental Neuropsychology, 36, 137–143.
Bianchini, K. J., Mathias, C. W., Greve, K. W., Houston, R. J., & Crouch, J. A. (2001). Classification accuracy of the Portland Digit Recognition Test in traumatic brain injury. The Clinical Neuropsychologist, 15, 461–470.
Boone, K. B. (2007). Assessment of feigned cognitive impairment: A neuropsychological perspective. New York, London: Guilford Press.
Bush, S. S., Ruff, R. M., Tröster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., et al. (2005). Symptom validity assessment: Practice issues and medical necessity: NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426.
Constantinou, M., Bauer, L., Ashendorf, L., Fisher, J. M., & McCaffrey, R. J. (2005). Is poor performance on recognition memory effort measures indicative of generalized poor performance on neuropsychological tests? Archives of Clinical Neuropsychology, 20, 191–198.
Cragar, D. E., Berry, D. T., Fakhoury, T. A., Cibula, J. E., & Schmitt, F. A. (2006). Performance of patients with epilepsy or psychogenic non-epileptic seizures on four measures of effort. The Clinical Neuropsychologist, 20, 552–566.
Curtis, K. L., Greve, K. W., Bianchini, K. J., & Brennan, A. (2006). California Verbal Learning Test indicators of malingered neurocognitive dysfunction: Sensitivity and specificity in traumatic brain injury. Assessment, 13, 46–61.
Dean, A. C., Victor, T. L., Boone, K. B., & Arnold, G. (2008). The relationship of IQ to effort test performance. The Clinical Neuropsychologist, 22, 705–722.
Drane, D. L., Ojemann, J. G., Kim, M. S., Gross, R. E., Miller, J. W., Faught, R. E., et al. (2016). Interictal epileptiform discharge effects on neuropsychological assessment and epilepsy surgical planning. Epilepsy & Behavior, 56, 131–138.
Duncan, A. (2005). The impact of cognitive and psychiatric impairment of psychotic disorders on the Test of Memory Malingering (TOMM). Assessment, 12, 123–129.
Elger, C. E., Helmstaedter, C., & Kurthen, M. (2004). Chronic epilepsy and cognition. The Lancet Neurology, 3, 663–672.
Fisher, R. S., Acevedo, C., Arzimanoglou, A., Bogacz, A., Cross, J. H., Elger, C. E., et al. (2014). ILAE official report: A practical clinical definition of epilepsy. Epilepsia, 55, 475–482.
Green, P. (2011). Comparison between the Test of Memory Malingering (TOMM) and the Nonverbal Medical Symptom Validity Test (NV-MSVT) in adults with disability claims. Applied Neuropsychology, 18, 18–26.
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6, 218.
Greve, K. W., & Bianchini, K. J. (2004). Setting empirical cut-offs on psychometric indicators of negative response bias: A methodological commentary with recommendations. Archives of Clinical Neuropsychology, 19, 533–541.
Greve, K. W., Bianchini, K. J., & Doane, B. M. (2006). Classification accuracy of the Test of Memory Malingering in traumatic brain injury: Results of a known-groups analysis. Journal of Clinical and Experimental Neuropsychology, 28, 1176–1190.
Greve, K. W., Springer, S., Bianchini, K. J., Black, F. W., Heinly, M. T., Love, J. M., et al. (2007). Malingering in toxic exposure: Classification accuracy of Reliable Digit Span and WAIS-III Digit Span scaled scores. Assessment, 14, 12–21.
Gunner, J. H., Miele, A. S., Lynch, J. K., & McCaffrey, R. J. (2012). Performance of non-neurological older adults on the Wisconsin Card Sorting Test and the Stroop Color–Word Test: Normal variability or cognitive impairment? Archives of Clinical Neuropsychology, 27, 398–405.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., Millis, S. R., & Conference Participants. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.
Heinly, M. T., Greve, K. W., Bianchini, K. J., Love, J. M., & Brennan, A. (2005). WAIS Digit Span-based indicators of malingered neurocognitive dysfunction: Classification accuracy in traumatic brain injury. Assessment, 12, 429–444.
Hilsabeck, R. C., Gordon, S. N., Hietpas-Wilson, T., & Zartman, A. L. (2011). Use of Trial 1 of the Test of Memory Malingering (TOMM) as a screening measure of effort: Suggested discontinuation rules. The Clinical Neuropsychologist, 25, 1228–1238.
Hunt, T. N., Ferrara, M. S., Miller, L. S., & Macciocchi, S. (2007). The effect of effort on baseline neuropsychological test scores in high school football athletes. Archives of Clinical Neuropsychology, 22, 615–621.
Iverson, G. L., & Tulsky, D. S. (2003). Detecting malingering on the WAIS-III: Unusual Digit Span performance patterns in the normal population and in clinical groups. Archives of Clinical Neuropsychology, 18(1), 1–9.
Lange, R. T., Pancholi, S., Bhagwat, A., Anderson-Barnes, V., & French, L. M. (2012). Influence of poor effort on neuropsychological test performance in US military personnel following mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 34, 453–466.
Larrabee, G. J. (2003). Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410–425.
Lee, G. P., Loring, D. W., & Martin, R. C. (1992). Rey's 15-item visual memory test for the detection of malingering: Normative observations on patients with neurological disorders. Psychological Assessment, 4, 43.
Loring, D. W., Lee, G. P., & Meador, K. J. (2005). Victoria Symptom Validity Test performance in non-litigating epilepsy surgery candidates. Journal of Clinical and Experimental Neuropsychology, 27, 610–617.
Marshall, P., Schroeder, R., O'Brien, J., Fischer, R., Ries, A., Blesi, B., et al. (2010). Effectiveness of symptom validity measures in identifying cognitive and behavioral symptom exaggeration in adult attention deficit hyperactivity disorder. The Clinical Neuropsychologist, 24, 1204–1237.
Mathias, C. W., Greve, K. W., Bianchini, K. J., Houston, R. J., & Crouch, J. A. (2002). Detecting malingered neurocognitive dysfunction using the Reliable Digit Span in traumatic brain injury. Assessment, 9, 301–308.
McCagh, J., Fisk, J. E., & Baker, G. A. (2009). Epilepsy, psychosocial and cognitive functioning. Epilepsy Research, 86(1), 1–14.
McKay, C., Casey, J. E., Wertheimer, J., & Fichtenberg, N. L. (2007). Reliability and validity of the RBANS in a traumatic brain injured sample. Archives of Clinical Neuropsychology, 22, 91–98.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24, 1094–1102.
Mojs, E., Gajewska, E., Głowacka, M. D., & Samborski, W. (2007). The prevalence of cognitive and emotional disturbances in epilepsy and its consequences for therapy. Annales Academiae Medicae Stetinensis, 53, 82–87.
Nelson, N. W., Hoelzle, J. B., McGuire, K. A., Ferrier-Auerbach, A. G., Charlesworth, M. J., & Sponheim, S. R. (2010). Evaluation context impacts neuropsychological performance of OEF/OIF veterans with reported combat-related concussion. Archives of Clinical Neuropsychology, 25, 713–723.
O'Bryant, S. E., Engel, L. R., Kleiner, J. S., Vasterling, J. J., & Black, F. W. (2007). Test of Memory Malingering (TOMM) Trial 1 as a screening measure for insufficient effort. The Clinical Neuropsychologist, 21, 511–521.
Randolph, C., Tierney, M. C., Mohr, E., & Chase, T. N. (1998). The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS): Preliminary clinical validity. Journal of Clinical and Experimental Neuropsychology, 20, 310–319.
Rees, L. M., Tombaugh, T. N., Gansler, D. A., & Moczynski, N. P. (1998). Five validation experiments of the Test of Memory Malingering (TOMM). Psychological Assessment, 10, 10–19.
Salazar, X. F., Lu, P. H., Wen, J., & Boone, K. B. (2007). The use of effort tests in ethnic minorities and in non-English-speaking and English as a second language populations. In K. B. Boone (Ed.), Assessment of feigned cognitive impairment: A neuropsychological perspective (pp. 405–427). New York, NY: Guilford.
Schroeder, R. W., & Marshall, P. S. (2011). Evaluation of the appropriateness of multiple symptom validity indices in psychotic and non-psychotic psychiatric populations. The Clinical Neuropsychologist, 25, 437–453.
Schroeder, R. W., Twumasi-Ankrah, P., Baade, L. E., & Marshall, P. S. (2012). Reliable Digit Span: A systematic review and cross-validation study. Assessment, 19, 21–30.
Schubert, R. (2005). Attention deficit disorder and epilepsy. Pediatric Neurology, 32(1), 1–10.
Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists' beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22, 213–223.
Slick, D. J., Sherman, E. M., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13, 545–561.
Slick, D. J., Tan, J. E., Strauss, E. H., & Hultsch, D. F. (2004). Detecting malingering: A survey of experts' practices. Archives of Clinical Neuropsychology, 19, 465–473.
Teichner, G., & Wagner, M. T. (2004). The Test of Memory Malingering (TOMM): Normative data from cognitively intact, cognitively impaired, and elderly patients with dementia. Archives of Clinical Neuropsychology, 19, 455–464.
Tombaugh, T. N. (1996). Test of Memory Malingering: TOMM. North Tonawanda, NY: Multi-Health Systems.
Tombaugh, T. N. (1997). The Test of Memory Malingering (TOMM): Normative data from cognitively intact and cognitively impaired individuals. Psychological Assessment, 9, 260.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale: Administration and scoring manual (3rd ed.). San Antonio, TX: Psychological Corporation.
Wechsler, D. (2008). Wechsler Adult Intelligence Scale (4th ed.). San Antonio, TX: Pearson.
Wechsler, D. (2011). WASI-II: Wechsler Abbreviated Scale of Intelligence (2nd ed.). Bloomington, MN: Pearson.
Weinborn, M., Orr, T., Woods, S. P., Conover, E., & Feix, J. (2003). A validation of the Test of Memory Malingering in a forensic psychiatric setting. Journal of Clinical and Experimental Neuropsychology, 25, 979–990.
Welsh, A. J., Bender, H. A., Whitman, L. A., Vasserman, M., & MacAllister, W. S. (2012). Clinical utility of Reliable Digit Span in assessing effort in children and adolescents with epilepsy. Archives of Clinical Neuropsychology, 27, 735–741.
Wisdom, N. M., Brown, W. L., Chen, D. K., & Collins, R. L. (2012). The use of all three Test of Memory Malingering trials in establishing the level of effort. Archives of Clinical Neuropsychology, 27, 208–212.
© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).

Archives of Clinical Neuropsychology, Oxford University Press. Published: April 5, 2018.
