Medical Evaluation Board Involvement, Non-Credible Cognitive Testing, and Emotional Response Bias in Concussed Service Members

ABSTRACT

Introduction: Military Service Members (SMs) with post-concussive symptoms are commonly referred for further evaluation and possible treatment to Department of Defense (DoD) Traumatic Brain Injury (TBI) Clinics, where neuropsychological screenings/evaluations are conducted. Although understudied to date, the base rates of noncredible task engagement on performance validity testing (PVT) during cognitive screenings/evaluations in military settings appear to be high. The current study objectives are to: (1) examine the base rates of noncredible PVT performances among SMs undergoing routine clinical or Medical Evaluation Board (MEB)-related workups, using multiple objective performance-based indicators; (2) determine whether involvement in the MEB is associated with PVT or symptom exaggeration/symptom validity testing (SVT) results; (3) elucidate which psychiatric symptoms are associated with noncredible PVT performances; and (4) determine whether MEB participation moderates the relationship between psychological symptom exaggeration and whether a SM goes on to demonstrate PVT failures, or vice versa.

Materials and Methods: This was a retrospective study of 71 consecutive military concussion cases drawn from a DoD TBI Clinic neuropsychology database. As part of neuropsychological evaluations, patients completed several objective performance-based PVTs and SVTs.

Results: Mean (SD) age of SMs was 36.0 (9.5), ranging from 19 to 59, and 93% of the sample was male. By self-identified ethnicity, 62% of the sample was Non-Hispanic White, 22.5% African American, and 15.5% Hispanic or Latino. The majority of the sample was Army (97%), predominantly Active Duty, and 51% were involved in the MEB at the time of evaluation.
About one-third (35.9%) of routine clinical patients failed one or more PVT indicators (12.8% failed 2), while among MEB patients 37.5% failed one or more and 15.6% failed 2 PVTs. Base rates of failure on one or more PVTs did not differ between routine clinical and MEB patients (p = 0.94). MEB involvement was not associated with increased emotional symptom response bias compared with routine clinical patients. PVT failures were positively correlated with somatization, anxiety, depressive symptoms, suspiciousness and hostility, atypical perceptions/alienation/subjective cognitive difficulties, borderline personality traits/features, and a penchant for aggression, in addition to symptom over-endorsement/exaggeration. No differences between routine clinical and MEB patients were found across the other SVT indicators. MEB status did not moderate the relationship between PVT failures and any of the SVTs.

Conclusion: Study results are broadly consistent with prior published studies documenting low to moderately high base rates of noncredible task engagement during neuropsychological evaluations in military and veteran settings. Results contrast with prior studies suggesting that involvement in the MEB is associated with an increased likelihood of poor PVT performance. This is the first study to show that MEB involvement did not strengthen the association between PVT performances and evidence of symptom exaggeration on SVTs. Consistent with prior studies, these results highlight that the same SMs who fail PVTs also tend to be the ones who go on to endorse a myriad of psychiatric symptoms and proclivities. The implications of variable or poor task engagement during routine clinical and MEB neuropsychological evaluations in military settings for treatment and disposition planning cannot be overstated.
BACKGROUND

Between 2001 and 2010, more than 2.1 million military Service Members (SMs) deployed in support of Iraq and Afghanistan military operations.1 Service Members have continued to deploy overseas to these locations since 2010. Depending on which studies are cited, it is estimated that as many as 15–23% of SMs who deployed to Iraq/Afghanistan suffered a mild traumatic brain injury (mTBI; also called concussion).2–5 Since recovery from a concussion in military settings is generally expected in most cases,6 military clinicians and researchers alike have attempted to identify factors involved in the maintenance and/or exacerbation of the physical, cognitive, and emotional sequelae [termed post-concussive symptoms (PCS)] following a recent or remote history of concussion. Service Members with PCS spectrum symptoms are commonly referred to a specialized Department of Defense (DoD) Traumatic Brain Injury or Veterans Affairs Polytrauma Clinic for further evaluation and possible treatment7 when their recovery is atypical.8 Within this context, neuropsychological screenings/comprehensive evaluations are performed.8–10 Within military health care settings, the prudent clinician maintains an awareness that, for some SMs, incentives exist if they are found unfit for duty (e.g., avoidance of military duty-related work, duty or assignment limitations, disability compensation awards)11–13 after going through the military Medical Evaluation Board (MEB)/Performance Evaluation Board (PEB). The MEB/PEB, from here on referred to as the MEB, is the process by which it is determined whether a SM's condition interferes with their capacity to function in their Military Occupational Specialty (equivalent to a civilian job series). If judged unfit for duty by the MEB, the SM will be medically retired from military service and will likely be eligible for disability-related compensation.
Thus, it can be argued that MEB involvement in military settings is akin to civilian disability and compensation-seeking settings, where secondary gain issues are omnipresent. The deliberate fabrication and presentation of symptoms to achieve disability-related incentives or to adopt the sick role, classically termed "malingering" and "factitious disorder," respectively,14 is not a new concern (interested readers can refer to Appendix A for a brief account of the history of feigning symptoms and detection methods). Feigning symptoms is a highly contentious topic in military and veteran health care12 but is assumed to be a low-frequency occurrence in general military health care settings, given the nobility of voluntary military service, military values, and/or possible serious legal consequences.15 Relatedly, based upon a regional review of the DoD electronic medical record, the formally documented incidence of malingering or factitious disorder in military settings is estimated to be 2.5 cases/10,000 patient encounters/year, or <1% of cases.16 Demographically, malingering in military settings tends to occur in younger, junior enlisted, unmarried male SMs, typically outside of deployment.15,16 In contrast, when surveyed, military health care providers often subjectively maintain the clinical opinion that evidence of malingering is in fact more common than objective medical record documentation would indicate, perhaps as high as 10–25% in general practitioner and behavioral health settings where military service is conscripted.17,18 With only a handful of peer-reviewed published studies available to date, the base rate of failures demonstrated on objective performance-based indicators sensitive to cognitive performance validity (i.e., performance validity testing; PVT) in military settings ranges from 8 to 54% of cases.10,19–23 Evaluation context appears to impact these rates, with the highest rates evident in SMs involved in the MEB.19,20,22,23 These published base rates are generally commensurate with
published data from Veterans Affairs Medical Center settings, where performance validity test failures are evident in 17–71% of neuropsychological screenings/evaluations performed.24–28 Thus, available evidence of failure(s) on PVT metrics and the resulting noncredible neuropsychological test results would suggest that neurocognitive malingering/factitious disorder among SMs with a history of concussion is in fact quite common.19 Challenges related to the interpretation of PVT failures are relatively new to military health care settings in general19 and DoD traumatic brain injury/VA polytrauma clinics in particular. For instance, some have argued that PVT failures represent clear, demonstrable psychometric evidence of malingering/factitious disorder,10,29 while others have suggested that these findings may instead represent signs of severe psychiatric comorbidity30 or even genuine cognitive impairment.31 Underappreciation of the full scope of the problem could be associated with inaccurate diagnoses and clinical misattributions made by well-intentioned clinicians working in military settings who are providing care to SMs with a history of concussion(s).
Such misattributions can potentially trickle down and lead to prescriptions for unnecessary sick leave, inappropriate or unneeded treatments that impact access to care or service utilization for those with legitimate condition(s), possible exposure to deleterious medication side effects, and wasted resources.7,11,19 Given the relative dearth of available information, a call has been made for more research in the area using multiple PVT indicators, in an effort to continue to elucidate the scope of the problem as well as to identify co-occurring factors/predictors associated with the wide variability in failure rates.19 In DoD/military settings, it has been shown that when SMs fail PVTs, they are also likely to demonstrate evidence of elevated psychological symptoms19,23 and/or symptom exaggeration9,20,32 on robust psychological inventories commonly employed in neuropsychological evaluation that are sensitive to response bias/impression management, such as the Personality Assessment Inventory (PAI)33 and/or the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF).34 In contrast, in military veteran and civilian populations, the relationship between noncredible task engagement/poor PVT performance and psychiatric symptom endorsement or exaggeration is less well understood,19 in that some studies support an association24,28,35,36 whereas others do not.25,37,38 Given the conflicting evidence in military veteran and civilian research, it is clear that more research is needed to further clarify this relationship in Active Duty SMs.

The current study has multiple objectives. First, this study seeks to examine the base rates of noncredible task engagement among SMs undergoing neuropsychological evaluation in a DoD TBI Clinic as part of routine clinical or MEB-related workups.
This study would add to the growing literature regarding the base rates of poor task engagement during neuropsychological evaluations evidenced in military settings by SMs with alleged history(ies) of concussion.10,20,21,23,24 Second, the present study will ascertain, as a function of involvement in the MEB versus standard clinical evaluation, the relative risk of (1) failure on one or more PVTs and (2) symptom exaggeration/response bias evident on a robust psychological measure (the PAI). Four peer-reviewed studies, to the authors' knowledge, have suggested that the context of the neuropsychological evaluation matters in military settings, in that routine clinical patients appear less likely than MEB patients to demonstrate objective evidence of poor effort.19,20,23,24 This is the first study the authors are aware of to determine the likelihood of producing noncredible cognitive results and psychological symptom exaggeration in military settings as a function of MEB involvement. Third, this study will elucidate which psychiatric symptoms, as endorsed on the PAI omnibus clinical scales, are associated with noncredible PVT performances. To date, as noted previously, these relationships remain understudied in the literature. Fourth, the present study will determine whether involvement in the MEB moderates the relationship between psychological symptom exaggeration and whether SMs go on to demonstrate PVT failures, or vice versa. Prior studies have examined noncredible rates on symptom report measures as a function of failure on cognitive performance validity testing.9,19,24 This is the first study the authors are aware of to formally and empirically test the possible moderating role of MEB status in whether SMs who are dissimulating will go on to invalidate PVTs or demonstrate psychological response bias.
METHODS

Patient Population

This was a retrospective study of 71 consecutive military concussion cases drawn from a neuropsychology clinic database. The data were generated and collected during neuropsychological evaluations performed in a DoD TBI Clinic. Determination of the TBI severity sustained by individuals in the database was made by a board-certified neuropsychologist. The severity determination was made after reviewing the available electronic medical record, including duration of altered mental status/loss of consciousness, presence of neurological soft signs, post-event symptom reports, Glasgow Coma Scale scores, Military Acute Concussion Evaluation scores, neuroimaging findings, and retrograde and/or post-traumatic amnesia, in accordance with the classification scheme promulgated by the VA/DoD mTBI Clinical Practice Guidelines.39,40 Patients with a history of moderate/severe TBI or another central nervous system disease process were excluded.

Measures

Cognitive Testing Task Engagement Indicators/PVTs

For the study's statistical analyses, performance on cognitive validity tests was coded as the number of tests failed (0, 1, or 2). It is standard operating procedure in the TBI Clinic-Neuropsychology Service at Dwight D. Eisenhower Medical Center to administer two or more cognitive PVT procedures, an approach consistent with recommendations in official position papers published by major neuropsychology professional organizations.41,42 Each of these PVTs is derived from a different area of cognition (i.e., memory- and attention-based). The likelihood of failing two or more PVTs is very low;43 therefore, when individuals fail two or more, they are deemed to have demonstrated noncredible task engagement/performance validity, and the neuropsychological evaluation is discontinued. Variable effort is demonstrated when an examinee performs below the established cutoff score on just one PVT but otherwise passes the remaining indicators.
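The coding scheme described above (number of PVTs failed, with two or more failures treated as noncredible) can be expressed as a short decision rule. This is an illustrative sketch only, not the clinic's actual scoring software; the test names and cutoff values are placeholders, not the study's operational cutoffs.

```python
def classify_task_engagement(pvt_scores, cutoffs):
    """Count PVT failures and apply the two-or-more decision rule.

    pvt_scores / cutoffs: dicts keyed by test name (e.g., 'RDS', 'TOMM').
    A score at or below its cutoff counts as a failure. Cutoff values
    here are placeholders, not the study's actual cutoffs.
    """
    failures = sum(1 for test, score in pvt_scores.items()
                   if score <= cutoffs[test])
    if failures >= 2:
        # Two or more failures: noncredible engagement; evaluation discontinued.
        return failures, "non-credible (discontinue evaluation)"
    elif failures == 1:
        # One failure with the rest passed: variable effort.
        return failures, "variable effort"
    # All PVTs passed: task engagement judged credible.
    return failures, "credible"

# Example with hypothetical scores and placeholder cutoffs:
n_failed, judgment = classify_task_engagement(
    {"RDS": 6, "TOMM": 44}, {"RDS": 7, "TOMM": 45})
print(n_failed, judgment)  # prints: 2 non-credible (discontinue evaluation)
```

The 0/1/2 count returned here is the same coding used as the dependent variable in the study's statistical analyses.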
When an examinee passes all PVTs administered, their degree of task engagement/performance is judged credible.

Reliable Digit Span

Reliable Digit Span (RDS) is a cognitive performance validity test that originated from the Digit Span subtest of the Wechsler Adult Intelligence Scale-Revised.44 The RDS requires participants to repeat increasingly long strings of numbers. According to a meta-analysis by Schroeder and colleagues,41 the established cutoff score used in the current study demonstrates a mean sensitivity of 72% and a mean specificity of 81% for neurocognitive malingering.

Test of Memory Malingering

The Test of Memory Malingering (TOMM) is a 50-item visual recognition measure constructed to assist in discriminating between memory malingering and genuine memory impairment.42 The TOMM demonstrated good specificity and sensitivity (greater than 90%) in the community, clinical, and "at-risk for malingering" samples included in the TOMM validation studies.42 The TOMM has good internal consistency reliability and convergent validity with the Forced Choice Recognition task.42,45 In the current study, the cutoff scores used were those published in the TOMM manual and the peer-reviewed literature.

Emotional Response Bias Indicator(s)/SVT

Personality Assessment Inventory

Participants also completed the Personality Assessment Inventory (PAI) as part of a battery of neuropsychological assessments during their evaluation. The PAI is a 344-item self-report inventory of psychopathological symptoms (e.g., depression, anxiety, and aggression) for adults.33 The PAI profile provides emotional response bias/validity indices, including indicators of inconsistency, unusual symptom endorsement, and/or exaggeration, along with a myriad of clinical scales and subscales.
Relative to large normative and clinical samples, T scores equal to or greater than 70 suggest significant symptoms, while scores between 60 and 69 reflect moderate elevations.33 Validity scales have different cutoff scores. The PAI validity and clinical scales demonstrate good test-retest reliability (i.e., 0.73–0.82) and adequate internal consistency reliability (i.e., 0.70–0.80), with the majority of the PAI clinical scales demonstrating adequate construct validity36 and discriminant and convergent validity.33 Study sample averages and standard deviations for the validity scales, clinical scales, and subscales are reported.

RESULTS

Demographic Characteristics

See Tables I and II for pertinent demographic information. Average (SD) age of SMs was 36.0 (9.5), ranging from 19 to 59, and 93% identified as male. In terms of ethnicity, 62% of patients self-identified as Non-Hispanic White, 22.5% as African American, and 15.5% as Hispanic or Latino. Average (SD) years of education was 13.3 (2.4), ranging from 7 to 20. Ninety-four percent self-identified English as their primary language. Patients from the Army, Navy, and Air Force across all components of service are represented in the study sample, although the preponderance of the data is from Army (97%) Active Duty (79%) SMs. The majority had deployed on at least one occasion [average number of deployments was 2.6 (2.0); range 0–16]. Fifty-one percent of the concussed patients were involved in the MEB process at the time of evaluation, having been referred to the neuropsychology service to help the Board ascertain whether they met or failed retention standards from a cognitive standpoint. There was no difference between the MEB and routine clinical sample groups in terms of age, sex, ethnicity, education, English as primary language, branch or component of service, time in service, or number of deployments. Table I.
Patient Demographics for Total Study Sample

  Variable                              N    Frequency (%)   Mean   SD    Min   Max
  Total sample size                     71
  Sex: male                                  93
  Ethnicity                             71
    Non-Hispanic White                       62.0
    African American                         22.5
    Hispanic or Latino                       15.5
  Age                                   71                   36.0   9.5   19    59
  Education (years)                     71                   13.3   2.4   7     20
  Rank                                  71
    ≤E3 (e.g., specialist or corporal)       4
    ≥E4 (e.g., sergeant)                     85
    Officer                                  11
  # of deployments                      68                   2.6    2.0   0     16
  Branch
    Army                                     97.2
    Navy                                     1.4
    Air Force                                1.4
  Component
    Active duty                              79
    Reserves                                 3
    National guard                           13
    Activated national guard                 1
    Activated reserves                       3
  External factors
    MEB underway                             51

Table II.
Frequencies of Participants Involved with MEB vs Clinical Sample and the Number of Validity Tests Failed Across Various Demographics

                             MEB Group               Clinical Sample
                             # PVTs failed           # PVTs failed
  Demographics           %    0    1    2        %    0    1    2     χ2 p-Value
  Sex                                                                 0.924
    Male                 91   19   6    4        95   23   9    5
    Female               9    1    1    1        5    2    0    0
  Ethnicity                                                           0.556
    Non-Hispanic White   69   14   6    2        56   16   4    2
    African American     16   3    1    1        28   5    3    3
    Hispanic or Latino   16   3    0    2        15   4    2    0
  Education                                                           0.076
    <12                  60   12   3    3        38   12   3    0
    13–16                37   8    3    0        54   10   6    5
    >16                  3    0    0    1        8    3    0    0
  English primary                                                     0.808
    Yes                  97   19   6    4        92   21   7    5
    No                   3    0    0    1        8    2    1    0
  Branch                                                              0.400
    Army                 97   19   7    5        97   25   8    5
    Navy                 3    1    0    0             0    0    0
    Air Force                 0    0    0        3    0    1    0
  Component                                                           0.258
    Active duty          77   16   7    1        82   20   7    5
    Reserves                  0    0    0        5    2    0    0
    National guard       19   3    0    3        8    2    1    0
    Activated nat. guard      0    0    0        3    1    0    0
    Activated reserves   3    1    0    0        3    0    1    0
  # Deployments                                                       0.591
    0                    10   1    0    2        5    1    1    0
    1                    19   5    0    1        16   4    0    2
    2                    32   6    3    1        32   9    1    2
    3                    10   2    1    0        34   7    5    1
    4                    26   4    2    2        11   3    1    0
    ≥5                   3    1    0    0        3    1    0    0

Study Objective 1: Examining Base Rates of Noncredible PVTs in Clinical Versus MEB Patients

As shown in Table III, 35.9% of routine clinical patients undergoing neuropsychological examination failed one or more PVT indicators and 12.8% failed 2, while 37.5% of MEB patients failed one or more and 15.6% failed 2 PVT indicators. Table III.
Frequencies of Participants Involved with MEB vs Clinical Sample and the Number of Performance Validity Tests Failed During Cognitive Testing

  # of PVTs failed           MEB Group   Clinical Sample
  0                          20          25
  1                          7           9
  2                          5           5
  Pearson chi-square: χ2 = 0.117, p = 0.943
  % failed 1 or more PVTs:   37.5%       35.9%
  % failed 2 PVTs:           15.6%       12.8%

Study Objective 2: Relationship Between MEB Status and Relative Risk of Failure on One or More PVTs and Symptom Exaggeration

As depicted in Table III, base rates of failure on one or more PVTs did not differ between routine clinical and MEB patients (p = 0.94). MEB involvement also was not associated with increased odds of demonstrating emotional symptom response bias, as measured by the PAI validity scales, compared with routine clinical patients (see Table IV; all p's ≥ 0.08). Table IV.
Coefficients of MEB Status Predicting Whether a Participant Had an Elevated PAI Score (INC > 72; INF > 74; NIM > 70; PIM > 67; DEF > 70; CDF > 70; MAL > 83; RDF > 60)

  Scale    b        Odds Ratio   p-Value
  INC      −17.87   0.00         0.99
  INF      0.44     1.56         0.72
  NIM      0.68     1.97         0.22
  PIM(a)   *        *            *
  DEF(b)   *        *            *
  CDF      1.22     3.39         0.29
  MAL      1.52     4.57         0.08
  RDF      0.11     1.12         0.86

(a) Just 2/71 participants had PIM > 67; unable to calculate a meaningful odds ratio. (b) No participants had an elevated DEF.

Study Objective 3: Emotional Symptom Correlates of PVT Performance

See Table V for the correlation matrix of PAI clinical scales/subscales and PVT performances.
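The odds ratios in Table IV are the exponentials of the logistic regression coefficients (OR = e^b), and the Table III group comparison is a standard Pearson chi-square on the 2 × 3 table of failure counts. Both reported values can be reproduced; this sketch uses scipy and is not claimed to be the authors' analysis software.

```python
import math
from scipy.stats import chi2_contingency

# Table IV: the odds ratio is the exponential of the logistic coefficient,
# e.g. for the MAL index (b = 1.52):
print(round(math.exp(1.52), 2))  # 4.57, matching the reported odds ratio

# Table III: Pearson chi-square on the 2 x 3 contingency table of PVT
# failure counts (rows: MEB, clinical; columns: 0, 1, 2 PVTs failed):
observed = [[20, 7, 5],
            [25, 9, 5]]
chi2, p, dof, expected = chi2_contingency(observed)
print(round(chi2, 3), round(p, 3))  # 0.117 0.943, as reported
```

Note that scipy's Yates continuity correction is only applied to 2 × 2 tables, so the 2 × 3 result here is the uncorrected Pearson statistic reported in the paper.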
Across the total sample, increasing evidence of PVT failures was positively correlated with increased preoccupation with health and physical functioning [i.e., PAI Somatic Complaints (SOM) scale; r = 0.42, p < 0.001], generalized tension and anxiety [i.e., PAI Anxiety (ANX) scale; r = 0.55, p < 0.001], anxiety-related disorders [i.e., PAI Anxiety-Related Disorders (ARD) scale; r = 0.45, p < 0.001], unipolar depressive symptoms [i.e., PAI Depression (DEP) clinical scale; r = 0.50, p < 0.001], suspicious and hostile presentation [i.e., PAI Paranoia (PAR) clinical scale; r = 0.39, p < 0.01], endorsement of atypical perceptions/alienation/subjective cognitive difficulties [i.e., PAI Schizophrenia (SCZ) scale; r = 0.44, p < 0.001], borderline personality traits/features [i.e., PAI Borderline (BOR) scale; r = 0.32, p < 0.01], and a penchant for aggression [i.e., PAI Aggression (AGG) scale; r = 0.29, p < 0.05]. In contrast, there were no significant associations between increasing PVT failures and hypomania/mania, antisocial traits, alcohol or other drug problems, or suicidal ideation [i.e., PAI Mania (MAN), Antisocial (ANT), Alcohol (ALC), Drug Problems (DRG), and Suicide (SUI) clinical scales; all p's > 0.05]. Table V. Pearson Correlations Between Number of PVTs Failed and PAI Scales; and Independent Sample t-Tests Between MEB vs.
PAI Scales and Subscales   PVT  MEB  Clinical  t-Test  PAI Scale  Failures r  Means (SD)  Means (SD)  p-Value  INC  0.12  49.52 (7.78)  52.24 (7.30)    INF  0.17  50.48 (9.15)  52.71 (9.66)    NIM  0.35**  67.34 (20.61)  58.24 (10.19)  *  PIM  −0.22  43.86 (10.04)  47.29 (10.91)    DI  −0.23  46.14 (10.85)  46.51 (9.57)    CDF  0.14  55.48 (8.75)  58.11 (10.46)    MAL  0.38**  62.38 (15.83)  56.24 (12.81)    RDF  0.35**  49.90 (12.04)  51.22 (11.09)    SOM  0.42***  74.10 (12.71)  64.03 (12.37)  **  ANX  0.55***  69.72 (15.84)  61.18 (12.62)  *  ARD  0.45***  69.31 (16.70)  62.66 (14.12)    DEP  0.50***  73.07 (18.25)  65.82 (13.99)    MAN  0.21  56.14 (9.75)  54.97 (9.29)    PAR  0.39**  65.76 (14.93)  62.13 (11.60)    SCZ  0.44***  68.55 (17.69)  64.61 (13.51)    BOR  0.32**  61.10 (12.61)  58.84 (11.50)    ANT  −0.13  55.55 (13.46)  54.39 (11.23)    ALC  −0.05  51.07 (12.45)  50.89 (8.60)    DRG*  0.06  46.90 (5.23)  50.21 (6.70)  *  AGG  0.29*  61.86 (15.89)  60.26 (14.71)      PVT  MEB  Clinical  t-Test  PAI Scale  Failures r  Means (SD)  Means (SD)  p-Value  INC  0.12  49.52 (7.78)  52.24 (7.30)    INF  0.17  50.48 (9.15)  52.71 (9.66)    NIM  0.35**  67.34 (20.61)  58.24 (10.19)  *  PIM  −0.22  43.86 (10.04)  47.29 (10.91)    DI  −0.23  46.14 (10.85)  46.51 (9.57)    CDF  0.14  55.48 (8.75)  58.11 (10.46)    MAL  0.38**  62.38 (15.83)  56.24 (12.81)    RDF  0.35**  49.90 (12.04)  51.22 (11.09)    SOM  0.42***  74.10 (12.71)  64.03 (12.37)  **  ANX  0.55***  69.72 (15.84)  61.18 (12.62)  *  ARD  0.45***  69.31 (16.70)  62.66 (14.12)    DEP  0.50***  73.07 (18.25)  65.82 (13.99)    MAN  0.21  56.14 (9.75)  54.97 (9.29)    PAR  0.39**  65.76 (14.93)  62.13 (11.60)    SCZ  0.44***  68.55 (17.69)  64.61 (13.51)    BOR  0.32**  61.10 (12.61)  58.84 (11.50)    ANT  −0.13  55.55 (13.46)  54.39 (11.23)    ALC  −0.05  51.07 (12.45)  50.89 (8.60)    DRG*  0.06  46.90 (5.23)  50.21 (6.70)  *  AGG  0.29*  61.86 (15.89)  60.26 (14.71)    * p < 0.05; **p < 0.01; ***p < 
0.001. Table V. Pearson Correlations Between Number of PVTs Failed and PAI Scale; and Independent Sample t-Tests Between MEB vs. PAI Scales and Subscales   PVT  MEB  Clinical  t-Test  PAI Scale  Failures r  Means (SD)  Means (SD)  p-Value  INC  0.12  49.52 (7.78)  52.24 (7.30)    INF  0.17  50.48 (9.15)  52.71 (9.66)    NIM  0.35**  67.34 (20.61)  58.24 (10.19)  *  PIM  −0.22  43.86 (10.04)  47.29 (10.91)    DI  −0.23  46.14 (10.85)  46.51 (9.57)    CDF  0.14  55.48 (8.75)  58.11 (10.46)    MAL  0.38**  62.38 (15.83)  56.24 (12.81)    RDF  0.35**  49.90 (12.04)  51.22 (11.09)    SOM  0.42***  74.10 (12.71)  64.03 (12.37)  **  ANX  0.55***  69.72 (15.84)  61.18 (12.62)  *  ARD  0.45***  69.31 (16.70)  62.66 (14.12)    DEP  0.50***  73.07 (18.25)  65.82 (13.99)    MAN  0.21  56.14 (9.75)  54.97 (9.29)    PAR  0.39**  65.76 (14.93)  62.13 (11.60)    SCZ  0.44***  68.55 (17.69)  64.61 (13.51)    BOR  0.32**  61.10 (12.61)  58.84 (11.50)    ANT  −0.13  55.55 (13.46)  54.39 (11.23)    ALC  −0.05  51.07 (12.45)  50.89 (8.60)    DRG*  0.06  46.90 (5.23)  50.21 (6.70)  *  AGG  0.29*  61.86 (15.89)  60.26 (14.71)      PVT  MEB  Clinical  t-Test  PAI Scale  Failures r  Means (SD)  Means (SD)  p-Value  INC  0.12  49.52 (7.78)  52.24 (7.30)    INF  0.17  50.48 (9.15)  52.71 (9.66)    NIM  0.35**  67.34 (20.61)  58.24 (10.19)  *  PIM  −0.22  43.86 (10.04)  47.29 (10.91)    DI  −0.23  46.14 (10.85)  46.51 (9.57)    CDF  0.14  55.48 (8.75)  58.11 (10.46)    MAL  0.38**  62.38 (15.83)  56.24 (12.81)    RDF  0.35**  49.90 (12.04)  51.22 (11.09)    SOM  0.42***  74.10 (12.71)  64.03 (12.37)  **  ANX  0.55***  69.72 (15.84)  61.18 (12.62)  *  ARD  0.45***  69.31 (16.70)  62.66 (14.12)    DEP  0.50***  73.07 (18.25)  65.82 (13.99)    MAN  0.21  56.14 (9.75)  54.97 (9.29)    PAR  0.39**  65.76 (14.93)  62.13 (11.60)    SCZ  0.44***  68.55 (17.69)  64.61 (13.51)    BOR  0.32**  61.10 (12.61)  58.84 (11.50)    ANT  −0.13  55.55 (13.46)  54.39 (11.23)    ALC  −0.05  51.07 (12.45)  50.89 
(8.60)    DRG*  0.06  46.90 (5.23)  50.21 (6.70)  *  AGG  0.29*  61.86 (15.89)  60.26 (14.71)    * p < 0.05; **p < 0.01; ***p < 0.001. Routine clinical and MEB patients differed across their degree of symptom endorsement involving somatic focus (i.e., PAI Somatic scale; MEB patients > routine clinical), anxiety (i.e., PAI ANX clinical scale; MEB patients > routine clinical), and drug-related problems (i.e., PAI DRG clinical scale; Routine clinical > MEB) but not across other omnibus clinical scales (see Table V). Study Objective 4: Moderational Role of MEB Status on Failure of PVT and Symptom Exaggeration Across the total sample, as displayed in Table V, an increasing number of PVT failures was moderately and positively correlated with increased symptom over-endorsement/exaggeration on the PAI [i.e., Negative Impression Management (NIM) scale, Malingering Supplemental Index (MAL), Rogers Discrimination Supplemental Index (RDF); r’s ranged from = 0.35–38; all p’s ≤ 0.01] but not indicators of inconsistency, low frequency symptom endorsement, or defensiveness [i.e., PAI Inconsistency (INC), Infrequency (INF), Positive Impression Management (PIM) scales, also Defensiveness Supplemental Index (DI), Cashel Discriminant Function (CDF) Supplemental Index; all p’s < 0.05]. Routine clinical patients did demonstrate less bizarre symptom over-endorsement (i.e., NIM) than MEB patients [i.e., T-score mean(SD) were 58.2(10.2) versus 67.3(20.6); p ≤ 0.05], however, no differences between routine clinical and MEB patients were found across other PAI indicators of response bias. Furthermore, the MEB NIM mean was below the cutoff of 70 indicating, sub-threshold negative impression management. To determine if MEB status (involved in or not) moderated the relationship between PVT and symptom exaggeration indicators, a series of regression analyses were performed. 
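The three analytic steps reported here (Pearson correlations with number of PVT failures, independent-samples t-tests comparing MEB vs. routine clinical patients, and moderation tested as a group-by-scale interaction term in regression) can be sketched as follows. This is an illustrative sketch on synthetic data, not the study's dataset; the variable names (nim_t, meb, pvt_failures) and the data-generating assumptions are hypothetical.

```python
# Sketch of the study's analysis steps on synthetic data:
# (1) Pearson correlation between number of PVT failures and a PAI scale,
# (2) independent-samples t-test comparing MEB vs. routine clinical patients,
# (3) moderation: does MEB status change the PVT-failures ~ NIM slope?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 71                                         # sample size matching the study
meb = (rng.random(n) < 0.51).astype(float)     # ~51% MEB involvement (hypothetical coding)
nim_t = rng.normal(60, 15, n)                  # hypothetical PAI NIM T-scores
# PVT failures (0-2) loosely tied to NIM but not to MEB, as the study found
pvt_failures = np.clip(np.round((nim_t - 50) / 15 + rng.normal(0, 0.8, n)), 0, 2)

# (1) Pearson correlation
r, p_r = stats.pearsonr(pvt_failures, nim_t)

# (2) Independent-samples t-test on NIM scores by MEB status
t_stat, p_t = stats.ttest_ind(nim_t[meb == 1], nim_t[meb == 0])

# (3) Moderation via OLS with an interaction term; a non-significant
# NIM x MEB coefficient means MEB status does not moderate the relationship.
X = np.column_stack([np.ones(n), nim_t, meb, nim_t * meb])
beta, *_ = np.linalg.lstsq(X, pvt_failures, rcond=None)
resid = pvt_failures - X @ beta
dof = n - X.shape[1]
se = np.sqrt(np.diag((resid @ resid / dof) * np.linalg.inv(X.T @ X)))
p_int = 2 * stats.t.sf(abs(beta[3] / se[3]), dof)  # p-value of interaction term

print(f"r = {r:.2f} (p = {p_r:.3f}); t-test p = {p_t:.3f}; interaction p = {p_int:.3f}")
```

In this design, the interaction coefficient (beta[3]) is the moderation effect of interest: if it does not differ from zero, the PVT-failure/symptom-exaggeration slope is the same for MEB and routine clinical patients.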
MEB status did not moderate the relationship between any of the validity scales (i.e., the INC, INF, NIM, PIM, DI, CDF, MAL, or RDF scales on the PAI) and PVT failures (all p's > 0.05). In other words, the relationship between PVT failures and these PAI symptom exaggeration scales did not depend on MEB involvement.

DISCUSSION

The current study had several aims. First, we sought to examine the base rates of noncredible task engagement/poor effort in SMs using two indicators of PVT. Second, we wanted to determine whether involvement in MEB elevated the likelihood of noncredible task engagement and psychological symptom exaggeration. Third, we worked to elucidate the relationships between psychological symptom endorsement and noncredible PVT performances. Finally, we ascertained whether MEB involvement moderates the relationship between PVT failures and psychological symptom exaggeration. Variable or poor task engagement was common across both routine clinical and MEB patients. Indeed, a little over a third of our routine clinical and MEB patients demonstrated what could be considered variable task engagement (defined as failure on one PVT). Approximately one in six SMs evaluated in our DoD TBI Clinic during a 12-mo period demonstrated noncredible task engagement (i.e., failed two PVTs), rendering their neurocognitive data invalid. Perhaps surprisingly, MEB involvement did not increase the likelihood of psychological response bias. However, increasing numbers of PVT failures were in fact moderately correlated with selected response bias indicators suggestive of increased symptom exaggeration. Moderate positive associations between PVT performance and several PAI clinical scales were demonstrated.
Performance validity test performance was associated with PAI clinical scales measuring somatic-focused concerns, generalized tension and anxiety, anxiety-related disorders, depressotypic symptoms, suspiciousness/hostility, subjective cognitive difficulties and atypical perceptions, borderline personality features, and a penchant for aggression. MEB patients did tend to endorse greater somatic focus and anxiety than routine clinical patients. In our sample, involvement in MEB did not moderate the relationships between PVT performance and psychological symptom exaggeration. Our study contributes to the growing body of neurocognitive validity research among military and veteran populations. Our results are broadly consistent with prior published studies that documented a range of base rates (low to moderately high) of poor task engagement during neuropsychological evaluations in military and veteran settings.10,19,20,23–25,28,29,46 Our results stand in contrast to prior studies that have suggested involvement in MEB confers an increased likelihood of poor PVT performance.19,20,23,24 Consistent with prior studies,30 our results highlight that the same SMs who fail PVTs also tend to endorse a myriad of psychiatric symptoms (e.g., somatic focus, anxiety, depressotypic symptoms, subjective cognitive difficulties) and proclivities (e.g., hostility, borderline personality traits, aggression). This study is also the first to show that MEB involvement did not strengthen the association between PVT performances and evidence of psychological symptom response bias.

Study Limitations

A number of limitations of the current study should be taken into account when considering the present findings. First, the demographic makeup of the sample is not representative of the general population of the military. According to Defense Manpower Research,47 the active duty U.S. military has a larger Non-Hispanic White population (75% vs. 69% in our sample), as well as a much larger proportion of females (29% vs. 9% in our sample), compared with the current sample. Thus, the results of the study may not apply to female SMs given the small number of female participants. Having said that, males are more common in combat military occupational specialties, and concussion is a common occupational hazard during deployment (recall that the majority of our study sample had deployed more than once). Also, because the study used a pre-existing data set comprised of consecutively referred patients, the authors had no control over the variability of demographic characteristics in the final sample. The final sample size was further affected by the exclusion of a number of participants who did not meet inclusion criteria. Second, TBI severity was determined by clinician judgment rather than by a research-based structured interview such as the Ohio State TBI Identification Method.48 To that end, the severity of remote history of TBI was graded by a board-certified neuropsychologist with >9 yr of post-doctoral experience working with military concussion patients. Traumatic brain injury severity was judged after reviewing available medical documentation, conducting a diagnostic clinical interview in which alleged civilian and military-related concussion history(ies) and peri-event sequelae were explored, and subsequently staging presumed TBIs in accordance with VA/DoD TBI Clinical Practice Guidelines.39,40 Third, in hindsight, the present study findings would likely have been enhanced had we been in a position to clearly ascertain SM career intention data such as self-reported career goals, motivation to continue serving, and prediction of MEB or clinical outcomes. Such data could have helped elucidate implicit/explicit motives rather than relying solely on psychometric evidence to pinpoint cases of probable poor effort/faking bad.
Having said that, other studies have conceptually29 and methodologically10 adopted an interpretive stance similar to the one employed in the current study.10,19,20,23,24

Military Relevance and Conclusion

Implications of variable or poor task engagement during routine clinical and MEB neuropsychological evaluations in military settings for treatment and disposition planning cannot be overstated. While SMs cannot reliably perform better on neuropsychological/cognitive testing than they are truly capable of (unless they are cheating or have been coached on how to fake good), they can surely perform below expectations on objective performance-based testing as a result of bona fide cognitive problems, variable motivation and effort, or deliberate faking bad. Said simply, in the presence of noncredible cognitive findings and a failure to objectively corroborate subjective cognitive complaints, it is hard to ethically justify extensive cognitive rehabilitation, initiating a trial of cognitively enhancing pharmacotherapeutics, offering material resources, and/or assigning a disability rating and awarding compensation for subjective cognitive dysfunction. Another important conclusion from the present study is that SM involvement in MEB proceedings, a scenario arguably analogous to both civilian and veteran disability and compensation seeking, did not further elevate the likelihood of demonstrating noncredible task engagement. When considered alongside the contradictory studies conducted to date, involvement in MEB per se may not necessarily elevate the likelihood of neurocognitive malingering above that seen in routine clinical practice. Instead, our results indicate that it is the SMs who endorse a myriad of subjective difficulties involving their physical health and functioning, anxiety, and/or depression who commonly go on to demonstrate noncredible engagement in the testing process.
At our DoD TBI Clinic, our staff tends to give SMs the benefit of the doubt when noncredible cognitive findings are demonstrated. That is, instead of defaulting to a diagnosis of neurocognitive malingering, consistent with other research,9 our providers re-focus their clinical attention on addressing the complicating clinical factors that can account for subjective cognitive difficulties and/or represent barriers to sitting for and participating in a neurocognitive evaluation – such as chronic pain, sleep disturbance, and severe psychopathology. This latter approach is not novel, as others have highlighted that severe psychiatric comorbidities may be a common cause of variable or poor effort during neuropsychological evaluation.29 Future studies should endeavor to replicate our findings in a more demographically diverse military sample, with inclusion of a more formalized, systematic TBI grading approach and assessment of self-reported career intention data.

Supplementary Material

Supplementary material is available at Military Medicine online.

Acknowledgments

The authors are grateful for the constructive criticisms, suggested edits, and thought-provoking comments from the Military Medicine editorial staff and peer reviewers.

References

1 Committee on the Assessment of the Readjustment Needs of Military Personnel, Veterans, and Their Families; Board on the Health of Select Populations; Institute of Medicine: Returning Home from Iraq and Afghanistan: Assessment of Readjustment Needs of Veterans, Service Members, and Their Families. Washington, DC, National Academies Press, 2013.
2 Hoge CW, McGurk D, Thomas JL, Cox AL, Engel CC, Castro CA: Mild traumatic brain injury in U.S. soldiers returning from Iraq. N Engl J Med 2008; 358: 453–63.
3 Terrio H, Brenner LA, Ivins BJ, Cho JM, Helmick K, Schwab K, Scally K, Bretthauer R, Warden D: Traumatic brain injury screening: preliminary findings in a US Army brigade combat team. J Head Trauma Rehabil 2009; 24: 14–23.
4 Wilk JE, Thomas JL, McGurk DM, Riviere LA, Castro CA, Hoge CW: Mild traumatic brain injury (concussion) during combat: lack of association of blast mechanisms with persistent postconcussive symptoms. J Head Trauma Rehabil 2010; 25: 9–14.
5 Wilk JE, Herrell RK, Wynn GH, Riviere LA, Hoge CW: Mild traumatic brain injury (concussion), posttraumatic stress disorder, and depression in U.S. soldiers involved in combat deployments: association with postdeployment symptoms. Psychosom Med 2012; 74: 249–57.
6 Lange R, Brickell TA, French LM, et al.: Neuropsychological outcomes from uncomplicated mild, complicated mild, and moderate traumatic brain injury in U.S. military personnel. Arch Clin Neuropsychol 2012; 27: 480–94.
7 Hoge CW, Goldberg HM, Castro CA: Care of war veterans with mild traumatic brain injury – flawed perspectives. N Engl J Med 2009; 360: 1588–91.
8 McCrea M, Pliskin N, Barth J, et al.: Official position of the military TBI task force on the role of neuropsychology and rehabilitation psychology in the evaluation, management, and research of military veterans with traumatic brain injury. Clin Neuropsychol 2008; 22: 10–26.
9 Jones A, Ingram MV, Ben-Porath YS: Scores on the MMPI-2-RF scales as a function of increasing levels of failure on cognitive symptom validity tests in a military sample. Clin Neuropsychol 2012; 26: 790–815.
10 Jones A: Test of memory malingering: cutoff scores for psychometrically defined malingering groups in a military sample. Clin Neuropsychol 2013; 27: 1043–59.
11 Tekin L, Tuncer SK, Akarsu S, Eroglu M: A young military recruit with unilateral atypical swelling of his left arm: malingering revisited. J Emerg Med 2013; 45: 714–7.
12 Schnellbacher S, O'Mara H: Identifying and managing malingering and factitious disorder in the military. Curr Psychiatry Rep 2016; 18: 105–12.
13 Lippa SM, Lange RT, French LM, Iverson GL: Performance validity, neurocognitive disorder, and post-concussion symptom reporting in service members with a history of mild traumatic brain injury. Arch Clin Neuropsychol 2017; 21: 1–13.
14 American Psychiatric Association: Diagnostic and Statistical Manual of Mental Disorders, 5th ed. Arlington, VA, American Psychiatric Publishing, 2013.
15 Armed Forces Health Surveillance Center: Malingering and factitious disorders and illness, active component, U.S. armed forces, 1998–2012. MSMR 2013; 20: 20–4.
16 Lande G, Williams LB: Prevalence and characteristics of military malingering. Mil Med 2013; 178: 50–4.
17 Iancu I, Ben-Yehuda Y, Yazvitzky R, Rosen Y, Knobler HY: Attitudes toward malingering: a study among general practitioners and mental health officers in the military. Med Law 2003; 22: 373–89.
18 Kokcu AT, Kurt E: General practitioners' approach to malingering in basic military training centers. J R Army Med Corps 2017; 163: 119–23.
19 Armistead-Jehle P, Buican B: Evaluation context and symptom validity test performances in a U.S. military sample. Arch Clin Neuropsychol 2012; 27: 828–39.
20 Armistead-Jehle P, Hansen CL: Comparison of the repeatable battery for the assessment of neuropsychological status effort index and stand-alone symptom validity tests in a military sample. Arch Clin Neuropsychol 2011; 26: 592–601.
21 Cooper DB, Vanderploeg RD, Armistead-Jehle P, Lewis JD, Bowles AO: Factors associated with neurocognitive performance in OIF/OEF service members with postconcussive complaints in postdeployment clinical settings. J Rehabil Res Dev 2014; 51: 1023–34.
22 Grills CE, Armistead-Jehle P: Performance validity test and neuropsychological assessment battery screening module performances in an active-duty sample with a history of concussion. Appl Neuropsychol Adult 2016; 23: 295–301.
23 Lange RT, Pancholi S, Bhagwat A, Anderson-Barnes V, French LM: Influence of poor effort on neuropsychological test performance in U.S. military personnel following mild traumatic brain injury. J Clin Exp Neuropsychol 2012; 34: 453–66.
24 Armistead-Jehle P: Symptom validity test performance in US veterans referred for evaluation of mild TBI. Appl Neuropsychol 2010; 17: 52–9.
25 Jurick SM, Twamley EW, Crocker LD, Hays CC, Orff HJ, Golshan S, Jak AJ: Postconcussive symptom overreporting in Iraq/Afghanistan veterans with mild traumatic brain injury. J Rehabil Res Dev 2016; 53: 571–84.
26 Nelson NW, Hoelzle JB, McGuire KA, Ferrier-Auerbach AG, Charlesworth MJ, Sponheim SR: Evaluation context impacts neuropsychological performance of OEF/OIF veterans with reported combat-related concussion. Arch Clin Neuropsychol 2010; 25: 713–23.
27 Whitney KA, Davis JJ, Shepard PH, Herman SM: Utility of the Response Bias Scale (RBS) and other MMPI-2 validity scales in predicting TOMM performance. Arch Clin Neuropsychol 2008; 23: 777–86.
28 Young JC, Kerns LA, Roper BL: Validation of the MMPI-2 Response Bias Scale and Henry-Heilbronner Index in a U.S. veteran population. Arch Clin Neuropsychol 2011; 26: 194–204.
29 Slick DJ, Sherman EM, Iverson GL: Diagnostic criteria for malingered neurocognitive dysfunction: proposed standards for clinical practice and research. Clin Neuropsychol 1999; 13: 545–61.
30 Marx BP, Miller MW, Sloan DM, Litz BT, Kaloupek DG, Keane TM: Military-related PTSD, current disability policies, and malingering. Am J Public Health 2008; 98: 773–4.
31 Willis PF, Farrer TJ, Bigler ED: Are effort measures sensitive to cognitive impairment? Mil Med 2011; 175: 1426–31.
32 Armistead-Jehle P, Cooper DB, Vanderploeg RD: The role of performance validity tests in the assessment of cognitive functioning after military concussion: a replication and extension. Appl Neuropsychol Adult 2016; 23: 264–73.
33 Morey LC: Personality Assessment Inventory. Odessa, FL, Psychological Assessment Resources, 1991.
34 Ben-Porath YS, Tellegen A: MMPI-2-RF (Minnesota Multiphasic Personality Inventory-2-Restructured Form) Manual for Administration, Scoring, and Interpretation. Minneapolis, MN, University of Minnesota Press, 2008.
35 Wygant DB, Sellbom M, Ben-Porath YS, Stafford KP, Freeman DB, Heilbronner RL: The relation between symptom validity testing and MMPI-2 scores as a function of forensic evaluation context. Arch Clin Neuropsychol 2007; 22: 489–99.
36 Haggerty KA, Frazier TW, Busch RM, Naugle RI: Relationships among Victoria Symptom Validity Test indices and Personality Assessment Inventory validity scales in a large clinical sample. Clin Neuropsychol 2007; 21: 917–28.
37 Demakis GJ, Gervais RO, Rohling ML: The effect of failure on cognitive and psychological symptom validity tests in litigants with symptoms of post-traumatic stress disorder. Clin Neuropsychol 2008; 22: 879–95.
38 Sumanti M, Boone KB, Savodnik I, Gorsuch R: Noncredible psychiatric and cognitive symptoms in a workers' compensation 'stress' claim sample. Clin Neuropsychol 2006; 22: 1080–92.
39 VA/DoD clinical practice guideline for management of concussion/mild traumatic brain injury (mTBI). 2009. Retrieved July 17, 2016. Available at http://www.healthquality.va.gov/guidelines/Rehab/mtbi/concussion_mtbi_full_1_0.pdf.
40 VA/DoD clinical practice guideline for management of concussion-mild traumatic brain injury. 2016. Retrieved July 17, 2016. Available at http://www.healthquality.va.gov/guidelines/Rehab/mtbi/mTBICPGFullCPG50821816.pdf.
41 Schroeder RW, Twumasi-Ankrah P, Baade LE, Marshall PS: Reliable Digit Span: a systematic review and cross-validation study. Assessment 2012; 19: 21–30.
42 Tombaugh TN: Test of Memory Malingering. North Tonawanda, NY, Multi-Health Systems, 1996.
43 Larrabee GJ: Aggregation across multiple indicators improves the detection of malingering: relationship to likelihood ratios. Clin Neuropsychol 2008; 22: 666–79.
44 Wechsler D: WAIS-R: Wechsler Adult Intelligence Scale-Revised. New York, NY, Psychological Corporation, 1981.
45 Moore BA, Donders J: Predictors of invalid neuropsychological test performance after traumatic brain injury. Brain Inj 2004; 18: 975–84.
46 Whitney KA, Shepard PH, Williams AL, Davis JJ, Adams KM: The Medical Symptom Validity Test in the evaluation of Operation Iraqi Freedom/Operation Enduring Freedom soldiers: a preliminary study. Arch Clin Neuropsychol 2009; 24: 145–52.
47 Defense Manpower Research: Demographics of active duty U.S. military. 2013 November 23. Available at http://www.statisticbrain.com/demographics-of-active-duty-u-s-military/.
48 Corrigan JD, Bogner J: Initial reliability and validity of the Ohio State University TBI identification method. J Head Trauma Rehabil 2007; 22: 318–29.
49 Heilbronner RL, Sweet JJ, Morgan JE, Larrabee GJ, Millis SR, Conference Participants: American Academy of Clinical Neuropsychology consensus conference statement on the neuropsychological assessment of effort, response bias, and malingering. Clin Neuropsychol 2009; 23: 1093–129.
50 Slavin-Mulford J, Sinclair SJ, Stein M, Malone J, Bello I, Blais MA: External validity of the Personality Assessment Inventory (PAI) in a clinical sample. J Pers Assess 2012; 94: 593–600.
51 McDermott BE: Psychological testing and the assessment of malingering. Psychiatr Clin N Am 2012; 35: 855–76.
52 Schnellbacher S, O'Mara H: Identifying and managing malingering and factitious disorder in the military. Curr Psychiatry Rep 2016; 18: 105.
53 Connor H: The use of anesthesia to diagnose malingering in the 19th century. J R Soc Med 2006; 99: 444–7.
54 Anderson DL, Anderson GT: Nostalgia and malingering in the military during the Civil War. Perspect Biol Med 1984; 28: 156–66.
55 Article 115. Malingering. Uniform Code of Military Justice, US Code Service.
56 Resnick PJ: Defrocking the fraud: the detection of malingering. Isr J Psychiatry Relat Sci 1993; 30: 93–101.
57 Bass C, Halligan P: Factitious disorders and malingering: challenges for clinical assessment and management. Lancet 2014; 383: 1422–32.
58 Rogers R, Kropp PR, Bagby RM, Dickens SE: Faking specific disorders: a study of the structured interview of reported symptoms. J Clin Psychol 1992; 48: 643–8.
59 Guriel J, Fremouw W: Assessing malingered posttraumatic stress disorder: a critical review. Clin Psychol Rev 2003; 23: 881–904.

Author notes: The views expressed are solely those of the authors and do not reflect the official policy or position of the US Army, the Department of Defense, US Government, Dwight D. Eisenhower Army Medical Center TBI Clinic/Neuroscience & Rehabilitation Center, or University of South Carolina – Aiken.

© Association of Military Surgeons of the United States 2018. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).

Military Medicine, Oxford University Press. ISSN 0026-4075; eISSN 1930-613X. doi: 10.1093/milmed/usy038.
BACKGROUND

Between 2001 and 2010, more than 2.1 million military Service Members (SMs) deployed in support of Iraq and Afghanistan military operations.1 Service Members have continued to deploy overseas to these locations since 2010.
Depending on which studies are cited, it is estimated that as many as 15–23% of SMs who deployed to Iraq/Afghanistan suffered a mild traumatic brain injury (mTBI; also called concussion).2–5 Since recovery from a concussion in military settings is generally expected in most cases,6 military clinicians and researchers alike have attempted to identify factors involved in the maintenance and/or exacerbation of the physical, cognitive, and emotional sequelae [termed post-concussive symptoms (PCS)] following a recent or remote history of concussion. Service Members with PCS spectrum symptoms are commonly referred to a specialized Department of Defense (DoD) Traumatic Brain Injury Clinic or Veterans Affairs Polytrauma Clinic for further evaluation and possible treatment7 when their recovery is atypical.8 Within this context, neuropsychological screenings/comprehensive evaluations are performed.8–10 Within military health care settings, the prudent clinician maintains an awareness that, for some SMs, incentives exist for being found unfit for duty (e.g., avoidance of military duty-related work, duty or assignment limitations, disability-compensation awards)11–13 after going through a military Medical Evaluation Board (MEB)/Physical Evaluation Board (PEB). The MEB/PEB, from here on referred to as MEB, is a process by which it is determined whether a SM's condition interferes with his or her capacity to function in the assigned Military Occupational Specialty (equivalent to a civilian job series). If judged unfit for duty by the MEB, the SM will be medically retired from military service and will likely be eligible for disability-related compensation. Thus, it can be argued that MEB involvement in military settings is akin to civilian disability and compensation-seeking settings where secondary gain issues are omnipresent.
The deliberate fabrication and presentation of symptoms to achieve disability-related incentives or to adopt the sick role, classically defined as "malingering" and "factitious disorder," respectively,14 is not a new concern (interested readers can refer to Appendix A for a brief account of the history of feigning symptoms and of detection methods). Feigning symptoms is a highly contentious topic in military and veteran health care12 but is assumed to be a low-frequency occurrence in general military health care settings given the nobility of voluntary military service, military values, and/or possible serious legal consequences.15 Relatedly, based upon a regional review of the DoD electronic medical record, the formally documented incidence of malingering or factitious disorder in military settings is estimated to be 2.5 cases/10,000 patient encounters/year, or <1% of cases.16 Demographically, malingering in military settings tends to occur in younger, junior enlisted, unmarried male SMs, typically outside of deployment.15,16 In contrast, when surveyed, military health care providers often subjectively maintain the clinical opinion that evidence of malingering is in fact more common than objective medical record documentation would indicate – perhaps as high as 10–25% in general practitioner and behavioral health settings where military service is conscripted.17,18 With only a handful of peer-reviewed published studies available to date, the base rate of failure demonstrated on objective performance-based indicators of cognitive performance validity testing (PVT) in military settings ranges from 8 to 54% of cases.10,19–23 Evaluation context appears to impact these rates, with the highest rates evident in SMs involved in the MEB.19,20,22,23 These published base rates are generally commensurate with published data from Veterans Affairs Medical Center settings, where performance validity testing failures are also evident in 17–71% of neuropsychological screenings/evaluations
performed.24–28 Thus, the available evidence of failure(s) on PVT metrics and resulting noncredible neuropsychological test results would suggest that neurocognitive malingering/factitious disorder among SMs with a history of concussion is in fact quite common.19 Challenges related to the interpretation of failures on PVTs are relatively new to military health care settings in general19 and to DoD traumatic brain injury/VA polytrauma clinics in particular. For instance, some have argued that PVT failures represent clear, demonstrable psychometric evidence of malingering/factitious disorder,10,29 while others have suggested that these findings may instead represent signs of severe psychiatric comorbidity30 or even genuine cognitive impairment.31 Underappreciation of the full scope of the problem could be associated with inaccurate diagnoses and clinical misattributions made by well-intentioned clinicians working in military settings who are providing care to SMs with a history of concussion(s).
Such misattributions can trickle down and lead to prescriptions for unnecessary sick leave, inappropriate or unneeded treatments that impact access to care or service utilization for those with legitimate condition(s), possible exposure to deleterious medication side effects, and wasted resources.7,11,19 Given the relative dearth of available information, a call has been made for more research in the area using multiple PVT indicators in an effort to continue to elucidate the scope of the problem as well as to identify co-occurring factors/predictors associated with the wide variability in failure rates.19 In DoD/military settings, it has been shown that when SMs fail PVTs, they are also likely to demonstrate evidence of elevated psychological symptoms19,23 and/or symptom exaggeration9,20,32 on robust psychological inventories commonly employed in neuropsychological evaluation that are sensitive to response bias/impression management, such as the Personality Assessment Inventory (PAI)33 and/or the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF).34 In contrast, in military veteran and civilian populations, the relationship between noncredible task engagement/poor PVT performance and psychiatric symptom endorsement or exaggeration is less well understood,19 in that some studies support an association24,28,35,36 whereas others do not.25,37,38 Given the conflicting evidence in military veteran and civilian research, it is clear that more research is needed to further clarify this relationship in active duty SMs. The current study has multiple objectives. First, this study examines the base rates of noncredible task engagement among SMs undergoing neuropsychological evaluation in a DoD TBI Clinic as part of routine clinical or MEB-related workups.
This study adds to the growing literature regarding the base rates of poor task engagement evidenced during neuropsychological evaluations in military settings by SMs with alleged history(ies) of concussion.10,20,21,23,24 Second, the present study ascertains the relative risk, for MEB versus standard clinical evaluation, of (1) failure on one or more PVTs and (2) symptom exaggeration/response bias evident on a robust psychological measure (the PAI). Four peer-reviewed studies known to the authors have suggested that the context of the neuropsychological evaluation matters in military settings, in that routine clinical patients appear to be less likely than MEB patients to demonstrate objective evidence of poor effort.19,20,23,24 This is the first study, to the authors' knowledge, to determine the likelihood of producing noncredible cognitive results and psychological symptom exaggeration in military settings as a function of MEB involvement. Third, this study elucidates which psychiatric symptoms, as endorsed on the PAI omnibus clinical scales, are associated with noncredible PVT performances. To date, as noted previously, these relationships remain an understudied area in the literature. Fourth, the present study determines whether involvement in the MEB moderates the relationship between psychological symptom exaggeration and whether or not SMs go on to demonstrate PVT failures – or vice versa. Prior studies have examined noncredible rates on symptom report measures as a function of failure on cognitive performance validity testing.9,19,24 This is the first study, to the authors' knowledge, to formally and empirically test the possible moderating role of MEB status on whether SMs who are dissimulating will go on to invalidate PVTs or to demonstrate psychological response bias.
METHODS

Patient Population

This was a retrospective study of 71 consecutive military concussion cases drawn from a neuropsychology clinic database. The data were generated and collected during neuropsychological evaluations performed in a DoD TBI Clinic. Determination of the TBI severity sustained by individuals in the database was made by a board-certified neuropsychologist. The severity determination was made after reviewing the available electronic medical record (e.g., duration of altered mental status/loss of consciousness, presence of neurological soft signs, post-event symptom topography reports, Glasgow Coma Scale scores, Military Acute Concussion Evaluation scores, neuroimaging, and retrograde and/or post-traumatic amnesia) in accordance with the classification scheme promulgated by the VA/DoD mTBI Clinical Practice Guidelines.39,40 Patients with a history of moderate/severe TBI or another central nervous system disease process were excluded.

Measures

Cognitive Testing Task Engagement Indicators/PVTs

For the study's statistical analyses, performance on cognitive validity tests was coded as the number of tests failed (0, 1, or 2). It is the standard operating procedure in the TBI Clinic-Neuropsychology Service at Dwight D. Eisenhower Medical Center to administer two or more cognitive PVT procedures – an approach consistent with recommendations posited in official position papers published by major neuropsychology professional organizations.41,42 Each of these PVTs is derived from a different area of cognition (i.e., memory and attention based). The likelihood of failing two or more PVTs is very low;43 therefore, when individuals fail two or more, they are deemed to have demonstrated noncredible task engagement/performance validity, and the neuropsychological evaluation is discontinued. Variable effort is demonstrated when an examinee performs below an established cutoff score on just one PVT but otherwise passes the remainder of the indicators.
When an examinee passes all PVTs administered, his or her degree of task engagement/performance is judged credible.

Reliable Digit Span

Reliable Digit Span (RDS) is a cognitive performance validity test that originated from the Digit Span subtest of the Wechsler Adult Intelligence Scale-Revised.44 The RDS requires participants to repeat increasingly long strings of numbers. According to a meta-analysis by Schroeder and colleagues,41 the established cutoff score used in the current study demonstrates a mean sensitivity of 72% and a mean specificity of 81% for neurocognitive malingering.

Test of Memory Malingering

The Test of Memory Malingering (TOMM) is a 50-item visual recognition measure constructed to assist in discriminating between memory malingering and genuine memory impairment.42 The TOMM demonstrated good specificity and sensitivity (greater than 90%) within the community, clinical, and "at-risk for malingering" samples included in the TOMM validation studies.42 The TOMM has good internal consistency reliability and convergent validity with the Forced Choice Recognition task.42,45 The cutoff scores used in the current study were those published in the TOMM manual and the peer-reviewed literature.

Emotional Response Bias Indicator(s)/SVT

Personality Assessment Inventory

Participants also completed the Personality Assessment Inventory (PAI) as part of the battery of neuropsychological assessments administered during their evaluation. The PAI is a 344-item self-report inventory of psychopathological symptoms (e.g., depression, anxiety, and aggression) for adults.33 The PAI profile provides emotional response bias/validity indices, including indicators of inconsistency, unusual symptom endorsement, and/or exaggeration, along with numerous clinical scales and subscales.
Relative to large normative and clinical samples, T scores equal to or greater than 70 suggest significant symptoms, while scores between 60 and 69 reflect moderate elevations.33 The validity scales have different cutoff scores. The PAI validity and clinical scales demonstrate good test-retest reliability (i.e., 0.73–0.82) and adequate internal consistency reliability (i.e., 0.70–0.80), with the majority of the PAI clinical scales demonstrating adequate construct validity36 as well as discriminant and convergent validity.33 Study-sample averages and standard deviations for the validity scales, clinical scales, and subscales are reported.

RESULTS

Demographic Characteristics

See Tables I and II for pertinent demographic information. The average (SD) age of SMs was 36.0 (9.5) years, ranging from 19 to 59, and 93% identified as male. In terms of ethnicity, 62% of patients self-identified as Non-Hispanic White, 22.5% as African American, and 15.5% as Hispanic or Latino. Average (SD) years of education was 13.3 (2.4), ranging from 7 to 20. Ninety-four percent self-identified English as their primary language. Patients from the Army, Navy, and Air Force across all components of service are represented in the study sample, although the preponderance of the data is from Army (97%) Active Duty (79%) SMs. The majority had deployed on at least one occasion [average (SD) number of deployments was 2.6 (2.0); range 0–16]. Fifty-one percent of the concussed patients were involved in the MEB process at the time of evaluation, having been referred to the neuropsychology service to help the Board ascertain whether they met or failed retention standards from a cognitive standpoint. There were no differences between the MEB and routine clinical groups in terms of age, sex, ethnicity, education, English as primary language, branch or component of service, time in service, or number of deployments. Table I.
Patient Demographics for Total Study Sample

                                         N    Frequency (%)   Mean   SD    Minimum   Maximum
  Total sample size                      71
  Sex
    Male                                      93
  Ethnicity                              71
    Non-Hispanic White                        62.0
    African American                          22.5
    Hispanic or Latino                        15.5
  Age                                    71                   36.0   9.5   19        59
  Education                              71                   13.3   2.4   7         20
  Rank                                   71
    ≤E3 (e.g., specialist or corporal)        4
    ≥E4 (e.g., sergeant)                      85
    Officer                                   11
  # of deployments                       68                   2.6    2.0   0         16
  Branch
    Army                                      97.2
    Navy                                      1.4
    Air Force                                 1.4
  Component
    Active duty                               79
    Reserves                                  3
    National guard                            13
    Activated national guard                  1
    Activated reserves                        3
  External factors
    MEB underway                              51

Table II.
Frequencies of Participants Involved with MEB vs Clinical Sample and the Number of Validity Tests Failed Across Various Demographics

                               MEB Group             Clinical Sample
                                    # PVTs Failed         # PVTs Failed   Chi-Square
  Demographics                 %    0    1    2       %    0    1    2    p-Value
  Sex                                                                     0.924
    Male                       91   19   6    4       95   23   9    5
    Female                     9    1    1    1       5    2    0    0
  Ethnicity                                                               0.556
    Non-Hispanic White         69   14   6    2       56   16   4    2
    African American           16   3    1    1       28   5    3    3
    Hispanic or Latino         16   3    0    2       15   4    2    0
  Education                                                               0.076
    <12                        60   12   3    3       38   12   3    0
    13–16                      37   8    3    0       54   10   6    5
    >16                        3    0    0    1       8    3    0    0
  English primary                                                         0.808
    Yes                        97   19   6    4       92   21   7    5
    No                         3    0    0    1       8    2    1    0
  Branch                                                                  0.400
    Army                       97   19   7    5       97   25   8    5
    Navy                       3    1    0    0            0    0    0
    Air Force                       0    0    0       3    0    1    0
  Component                                                               0.258
    Active duty                77   16   7    1       82   20   7    5
    Reserves                        0    0    0       5    2    0    0
    National guard             19   3    0    3       8    2    1    0
    Activated national guard        0    0    0       3    1    0    0
    Activated reserves         3    1    0    0       3    0    1    0
  # Deployments                                                           0.591
    0                          10   1    0    2       5    1    1    0
    1                          19   5    0    1       16   4    0    2
    2                          32   6    3    1       32   9    1    2
    3                          10   2    1    0       34   7    5    1
    4                          26   4    2    2       11   3    1    0
    ≥5                         3    1    0    0       3    1    0    0

Study Objective 1: Examining Base Rates of Noncredible PVTs in Clinical Versus MEB Patients

As shown in Table III, 35.9% of routine clinical patients undergoing neuropsychological examination demonstrated failure on one or more PVT indicators and 12.8% failed two, while 37.5% of MEB patients failed one or more and 15.6% failed two PVT indicators. Table III.
Frequencies of Participants Involved with MEB vs Clinical Sample and the Number of Performance Validity Tests Failed During Cognitive Testing

  # of PVTs Failed           MEB Group   Clinical Sample
  0                          20          25
  1                          7           9
  2                          5           5

  Pearson Chi-Square: χ2 = 0.117, p = 0.943
  % Failed 1 or more PVTs:   37.5%       35.9%
  % Failed 2 PVTs:           15.6%       12.8%

Study Objective 2: Relationship Between MEB Status and Relative Risk of Failure on One or More PVTs and Symptom Exaggeration

As depicted in Table III, base rates of failure on one or more PVTs did not differ between routine clinical and MEB patients (p = 0.94). MEB involvement also was not associated with increased odds of demonstrating emotional symptom response bias, as measured by the PAI validity scales, compared with routine clinical patients (see Table IV; all p's ≥ 0.08). Table IV.
Coefficients of MEB Status Predicting Whether a Participant Had an Elevated PAI Score (INC > 72; INF > 74; NIM > 70; PIM > 67; DEF > 70; CDF > 70; MAL > 83; RDF > 60)

  Scale     b        Odds Ratio   p-Value
  INC       −17.87   0.00         0.99
  INF       0.44     1.56         0.72
  NIM       0.68     1.97         0.22
  PIM (a)   *        *            *
  DEF (b)   *        *            *
  CDF       1.22     3.39         0.29
  MAL       1.52     4.57         0.08
  RDF       0.11     1.12         0.86

(a) Just 2/71 participants had PIM > 67; unable to calculate a meaningful odds ratio.
(b) No participants had an elevated DEF.

Study Objective 3: Emotional Symptom Correlates of PVT Performance

See Table V for the correlation matrix for PAI clinical scales/subscales and PVT performances.
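As a sanity check, the chi-square comparison reported in Table III and the odds ratios in Table IV can be reproduced directly from the tabled values. Below is a minimal Python sketch (not the authors' code; counts and coefficients are transcribed from Tables III and IV, and the p-value uses the fact that the survival function of a chi-square variable with 2 degrees of freedom is exp(−x/2)):

```python
import math

# Observed PVT-failure counts from Table III (rows: MEB, clinical; columns: 0, 1, 2 PVTs failed).
observed = [[20, 7, 5],
            [25, 9, 5]]

row_totals = [sum(row) for row in observed]          # 32 (MEB), 39 (clinical)
col_totals = [sum(col) for col in zip(*observed)]    # 45, 16, 10
n = sum(row_totals)                                  # 71

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * column_total / n.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(3)
)

# df = (rows - 1) * (columns - 1) = 2; for df = 2 the chi-square survival function is exp(-x/2).
p = math.exp(-chi2 / 2)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # chi2 = 0.117, p = 0.943, as reported

# The odds ratios in Table IV are simply the exponentiated logistic-regression coefficients b.
for scale, b in [("NIM", 0.68), ("CDF", 1.22), ("MAL", 1.52), ("RDF", 0.11)]:
    print(f"{scale}: OR = exp({b}) = {math.exp(b):.2f}")  # 1.97, 3.39, 4.57, 1.12
```

Note that because each odds ratio is exp(b), the b and odds-ratio columns of Table IV carry the same information on different scales.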
Across the total sample, increasing evidence of PVT failures was positively correlated with increased preoccupation with health and physical functioning [i.e., PAI Somatic Complaints (SOM) scale; r = 0.42, p < 0.001], generalized tension and anxiety [i.e., PAI Anxiety (ANX) scale; r = 0.55, p < 0.001], anxiety-related disorders [i.e., PAI Anxiety Related Disorders (ARD) scale; r = 0.45, p < 0.001], unipolar depressive symptoms [i.e., PAI Depression (DEP) clinical scale; r = 0.50, p < 0.001], a suspicious and hostile presentation [i.e., PAI Paranoia (PAR) clinical scale; r = 0.39, p < 0.01], endorsement of atypical perceptions/alienation/subjective cognitive difficulties [i.e., PAI Schizophrenia (SCZ) scale; r = 0.44, p < 0.001], and borderline personality traits/features [i.e., PAI Borderline (BOR) scale; r = 0.32, p < 0.01], along with a penchant for aggression [i.e., PAI Aggression (AGG) scale; r = 0.29, p < 0.05]. In contrast, there were no significant associations between increasing PVT failures and hypomania/mania, antisocial traits, alcohol or other drug problems, or suicidal ideation [i.e., PAI Mania (MAN), Antisocial (ANT), Alcohol (ALC), Drug Problems (DRG), and Suicide (SUI) clinical scales; all p's > 0.05]. Table V. Pearson Correlations Between Number of PVTs Failed and PAI Scale; and Independent Sample t-Tests Between MEB vs.
PAI Scales and Subscales

  PAI Scale   PVT Failures r   MEB Mean (SD)    Clinical Mean (SD)   t-Test p-Value
  INC         0.12             49.52 (7.78)     52.24 (7.30)
  INF         0.17             50.48 (9.15)     52.71 (9.66)
  NIM         0.35**           67.34 (20.61)    58.24 (10.19)        *
  PIM         −0.22            43.86 (10.04)    47.29 (10.91)
  DI          −0.23            46.14 (10.85)    46.51 (9.57)
  CDF         0.14             55.48 (8.75)     58.11 (10.46)
  MAL         0.38**           62.38 (15.83)    56.24 (12.81)
  RDF         0.35**           49.90 (12.04)    51.22 (11.09)
  SOM         0.42***          74.10 (12.71)    64.03 (12.37)        **
  ANX         0.55***          69.72 (15.84)    61.18 (12.62)        *
  ARD         0.45***          69.31 (16.70)    62.66 (14.12)
  DEP         0.50***          73.07 (18.25)    65.82 (13.99)
  MAN         0.21             56.14 (9.75)     54.97 (9.29)
  PAR         0.39**           65.76 (14.93)    62.13 (11.60)
  SCZ         0.44***          68.55 (17.69)    64.61 (13.51)
  BOR         0.32**           61.10 (12.61)    58.84 (11.50)
  ANT         −0.13            55.55 (13.46)    54.39 (11.23)
  ALC         −0.05            51.07 (12.45)    50.89 (8.60)
  DRG         0.06             46.90 (5.23)     50.21 (6.70)         *
  AGG         0.29*            61.86 (15.89)    60.26 (14.71)

*p < 0.05; **p < 0.01; ***p < 0.001.

Routine clinical and MEB patients differed in their degree of symptom endorsement involving somatic focus (i.e., PAI SOM clinical scale; MEB patients > routine clinical), anxiety (i.e., PAI ANX clinical scale; MEB patients > routine clinical), and drug-related problems (i.e., PAI DRG clinical scale; routine clinical > MEB) but not across the other omnibus clinical scales (see Table V).

Study Objective 4: Moderating Role of MEB Status on PVT Failure and Symptom Exaggeration

Across the total sample, as displayed in Table V, an increasing number of PVT failures was moderately and positively correlated with increased symptom over-endorsement/exaggeration on the PAI [i.e., the Negative Impression Management (NIM) scale, Malingering Supplemental Index (MAL), and Rogers Discriminant Function Supplemental Index (RDF); r's ranged from 0.35 to 0.38; all p's ≤ 0.01] but not with indicators of inconsistency, low-frequency symptom endorsement, or defensiveness [i.e., the PAI Inconsistency (INC), Infrequency (INF), and Positive Impression Management (PIM) scales, along with the Defensiveness (DI) and Cashel Discriminant Function (CDF) Supplemental Indices; all p's > 0.05]. Routine clinical patients did demonstrate less bizarre symptom over-endorsement (i.e., NIM) than MEB patients [T-score means (SD) were 58.2 (10.2) versus 67.3 (20.6); p ≤ 0.05]; however, no differences between routine clinical and MEB patients were found across the other PAI indicators of response bias. Furthermore, the MEB NIM mean was below the cutoff of 70, indicating sub-threshold negative impression management. To determine whether MEB status moderated the relationship between PVT and symptom exaggeration indicators, a series of regression analyses were performed.
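Moderation analyses of this kind amount to testing an interaction term: the number of PVT failures is regressed on a PAI validity score, MEB status, and their product, and a nonsignificant product term indicates that the PVT–symptom validity relationship does not depend on MEB status. A minimal sketch of such a test, using ordinary least squares on synthetic illustrative data (not the study data; the variable names, effect sizes, and noise levels below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 71
meb = (rng.random(n) < 0.5).astype(float)      # 0 = routine clinical, 1 = MEB (synthetic)
nim = rng.normal(60.0, 15.0, n)                # synthetic NIM T scores

# Synthetic outcome: PVT failures rise with NIM, with no built-in NIM x MEB interaction.
pvt_failures = 0.03 * (nim - 60.0) + rng.normal(0.0, 0.8, n)

# Design matrix: intercept, centered predictor, moderator, and their product.
nim_c = nim - nim.mean()
X = np.column_stack([np.ones(n), nim_c, meb, nim_c * meb])
beta, *_ = np.linalg.lstsq(X, pvt_failures, rcond=None)

# t statistic for each coefficient from the usual OLS covariance estimate.
resid = pvt_failures - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_interaction = beta[3] / se[3]

# No interaction was built in, so |t| for the product term should typically fall
# below ~2 (the two-sided critical value for alpha = 0.05 at these degrees of freedom).
print(f"interaction t = {t_interaction:.2f}")
```

Because no interaction was simulated, the product term will usually be nonsignificant, mirroring the study's finding that MEB status did not moderate the PVT–SVT relationship; on real data this test would be run once per PAI validity scale.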
MEB status did not moderate the relationship between any of the validity scales (i.e., the INC, INF, NIM, PIM, DI, CDF, MAL, or RDF scales on the PAI) and PVT failures (all p's > 0.05). In other words, the relationship between PVT failures and these PAI symptom exaggeration scales did not depend on MEB involvement.

DISCUSSION

The current study had several aims. First, we sought to examine the base rates of noncredible task engagement/poor effort in SMs using two PVT indicators. Second, we wanted to determine whether involvement in the MEB elevated the likelihood of noncredible task engagement and psychological symptom exaggeration. Third, we worked to elucidate the relationships between psychological symptom endorsement and noncredible PVT performances. Finally, we ascertained whether or not MEB status moderates the relationship between PVT failures and psychological symptom exaggeration. Variable or poor task engagement was common across both routine clinical and MEB patients. Indeed, a little over a third of our routine clinical and MEB patients demonstrated at least variable task engagement (i.e., failed one or more PVTs). Approximately one in six SMs evaluated in our DoD TBI Clinic during a 12-mo period demonstrated noncredible task engagement (i.e., failed two PVTs), rendering their neurocognitive data invalid. Perhaps surprisingly, MEB involvement did not increase the likelihood of psychological response bias. However, increasing numbers of PVT failures were in fact moderately correlated with selected response bias indicators suggestive of increased symptom exaggeration. Moderate positive associations between PVT performance and several PAI clinical scales were also demonstrated.
Performance validity test performance was associated with PAI clinical scales measuring somatically focused concerns, generalized tension and anxiety, anxiety-related disorders, depressotypic symptoms, suspiciousness/hostility, subjective cognitive difficulties and atypical perceptions, borderline personality features, along with a penchant for aggression. MEB patients did tend to endorse greater somatic focus and anxiety than did routine clinical patients. In our sample, involvement in MEB did not moderate the relationships between PVT performance and psychological symptom exaggeration. Our study contributes to the growing body of neurocognitive validity research among military and veteran populations. Our results are broadly consistent with prior published studies that have documented a range of base rates (low to moderately high) of poor task engagement during neuropsychological evaluations in military and veteran settings.10,19,20,23–25,28,29,46 Our results stand in contrast to prior studies that have suggested involvement in MEB confers increased likelihood of poor PVT performance.19,20,23,24 Consistent with prior studies,30 our results do highlight that the same SMs who fail PVTs also tend to endorse a myriad of psychiatric symptoms (e.g., somatic focus, anxiety, depressotypic symptoms, subjective cognitive difficulties) and proclivities (e.g., hostility, borderline personality traits, aggression). This study is also the first to show that MEB involvement did not strengthen the association between PVT performances and evidence of psychological symptom response bias. Study Limitations A number of limitations of the current study should be taken into account when considering the present findings. First, the demographic makeup of the sample is not representative of the general population of the military. According to Defense Manpower Research,47 the active duty U.S. military has a larger Non-Hispanic White population (75% vs.
69% in our sample), as well as a much larger proportion of females (29% vs. 9% in our sample). Thus, results of the study may not apply to female SMs given the small number of female participants. Having said that, males are more common in combat military occupational specialties, and concussion is a common occupational hazard during deployment (recall, the majority of our study sample had deployed more than once). Also, due to the nature of the study and its use of a pre-existing data set comprising consecutively referred patients, the authors had no control over the variability of demographic characteristics in the final sample. The final sample size was also affected by exclusion of a number of participants who did not meet inclusion criteria for the study. Second, another limitation of the current study is that severity of TBI was determined by clinician judgment rather than by a research-based structured interview such as the Ohio State TBI Identification Method.48 To that end, the severity of remote history of TBI was graded by a board certified neuropsychologist with >9 yr of post-doctoral experience working with military concussion patients. Traumatic brain injury severity was judged after reviewing available medical documentation, conducting a diagnostic clinical interview in which alleged civilian and military related concussion history(ies) and peri-event sequelae were explored, and subsequently staging presumed TBIs in accordance with VA/DoD TBI Clinical Practice Guidelines.39,40 Third, in hindsight, the present study findings would likely have been enhanced had we been in a position to clearly ascertain SM career intention data such as self-reported career goals, motivation to continue, and prediction of MEB or clinical outcomes. Such data could have helped elucidate implicit/explicit motives rather than relying solely on psychometric evidence to pinpoint cases of probable poor effort/faking bad.
Having said that, other studies have conceptually29 and methodologically10 adopted an interpretive stance similar to the one employed in the current study.10,19,20,23,24 Military Relevance and Conclusion Implications of variable or poor task engagement during routine clinical and MEB neuropsychological evaluation in military settings on treatment and disposition planning cannot be overstated. While SMs cannot reliably perform better on neuropsychological/cognitive testing than they truly are capable of (unless they are cheating or have been coached on how to fake good), they can surely perform below expectations on objective performance-based testing as a result of bona fide cognitive problems, variable motivation and effort, or deliberate faking bad. Said simply, in the presence of noncredible cognitive findings and failure to objectively corroborate subjective cognitive complaints, it is hard to ethically justify extensive cognitive rehabilitation, initiating a trial of cognitively enhancing pharmacotherapeutics, offering material resources, and/or assigning a disability rating and awarding compensation for subjective cognitive dysfunction. Another important conclusion from the present study is that SM involvement in MEB proceedings, a scenario that is arguably analogous to both civilian and veteran disability and compensation seeking, did not further elevate the likelihood of demonstrating noncredible task engagement. When considered alongside the contradictory studies conducted to date, involvement in MEB per se may not necessarily elevate the likelihood of neurocognitive malingering over that seen in routine clinical practice. Instead, our results indicate that it is the SMs who endorse a myriad of subjective difficulties involving their physical health and functioning, anxiety, and/or depression who commonly go on to demonstrate noncredible engagement in the testing process.
At our DoD TBI Clinic, our staff tends to give SMs the benefit of the doubt when noncredible cognitive findings are demonstrated. That is, instead of defaulting to a diagnosis of neurocognitive malingering, consistent with other research,9 our providers re-focus their clinical attention on addressing the complicating clinical factors that can either account for subjective cognitive difficulties and/or represent barriers to sitting for and participating in a neurocognitive evaluation – such as chronic pain, sleep disturbance, and severe psychopathology. This latter approach is not novel, as others have highlighted that severe psychiatric comorbidities may be a common cause of variable or poor effort during neuropsychological evaluation.29 Future studies should endeavor to replicate our findings in a more demographically diverse military sample with inclusion of more formalized, systematic TBI grading approaches and assessment of subjective self-reported career intention data. Supplementary Material Supplementary material is available at Military Medicine online. Acknowledgments The authors are grateful for the constructive criticisms, suggested edits, and thought-provoking comments from the Military Medicine editorial staff and peer reviewers. References 1 Committee on the Assessment of the Readjustment Needs of Military Personnel, Veterans, and Their Families; Board on the Health of Select Populations; Institute of Medicine: Returning Home from Iraq and Afghanistan: Assessment of Readjustment Needs of Veterans, Service Members, and Their Families. Washington, DC, National Academies Press (US), 2013. 2 Hoge CW, McGurk D, Thomas JL, Cox AL, Engel CC, Castro CA: Mild traumatic brain injury in U.S. Soldiers returning from Iraq. N Engl J Med 2008; 358: 453–63.
3 Terrio H, Brenner LA, Ivins BJ, Cho JM, Helmick K, Schwab K, Scally K, Bretthauer R, Warden D: Traumatic brain injury screening: preliminary findings in a US Army brigade combat team. J Head Trauma Rehabil 2009; 24: 14–23. 4 Wilk JE, Thomas JL, McGurk DM, Riviere LA, Castro CA, Hoge CW: Mild traumatic brain injury (concussion) during combat: lack of association of blast mechanisms with persistent post concussive symptoms. J Head Trauma Rehabil 2010; 25: 9–14. 5 Wilk JE, Herrell RK, Wynn GH, Riviere LA, Hoge CW: Mild traumatic brain injury (concussion), posttraumatic stress disorder, and depression in U.S. soldiers involved in combat deployments: association with post deployment symptoms. Psychosom Med 2012; 74: 249–57. 6 Lange R, Brickell TA, French LM, et al.: Neuropsychological outcomes from uncomplicated mild, complicated mild, and moderate traumatic brain injury in U.S. military. Arch Clin Neuropsychol 2012; 27: 480–94. 7 Hoge CW, Goldberg HM, Castro CA: Care of war veterans with mild traumatic brain injury – flawed perspectives. N Engl J Med 2009; 360: 1588–91. 8 McCrea M, Pliskin N, Barth J, et al.: Official position of the military TBI task force on the role of neuropsychology and rehabilitation psychology in the evaluation, management, and research of military veterans with traumatic brain injury. Clin Neuropsychol 2008; 22: 10–26. 9 Jones A, Ingram MV, Ben-Porath YS: Scores on the MMPI-2-RF scales as a function of increasing levels of failure on cognitive symptom validity tests in a military sample. Clin Neuropsychol 2012; 26: 790–815.
10 Jones A: Test of memory malingering: cutoff scores for psychometrically defined malingering groups in a military sample. Clin Neuropsychol 2013; 27: 1043–1059. 11 Tekin L, Tuncer SK, Akarsu S, Eroglu M: A young military recruit with unilateral atypical swelling of his left arm: malingering revisited. J Emerg Med 2013; 45: 714–717. 12 Schnellbacher S, O’Mara H: Identifying and managing malingering and factitious disorder in the military. Curr Psychiatry Rep 2016; 18: 105–112. 13 Lippa SM, Lange RT, French LM, Iverson GL: Performance validity, neurocognitive disorder, and post-concussion symptom reporting in service members with a history of mild traumatic brain injury. Arch Clin Neuropsychol 2017; 21: 1–13. 14 American Psychiatric Association: Diagnostic and Statistical Manual of Mental Disorders, 5th ed. Arlington, VA, American Psychiatric Publishing, 2013. 15 Armed Forces Health Surveillance Center: Malingering and factitious disorders and illness, active component, U.S. armed forces, 1998–2012. MSMR 2013; 20: 20–24. 16 Lande G, Williams LB: Prevalence and characteristics of military malingering. Mil Med 2013; 178: 50–4. 17 Iancu I, Ben-Yehuda Y, Yazvitzky R, Rosen Y, Knobler HY: Attitudes toward malingering: a study among general practitioners and mental health officers in the military. Med Law 2003; 22: 373–389. 18 Kokcu AT, Kurt E: General practitioners’ approach to malingering in basic military training centers. J R Army Med Corps 2017; 163: 119–123. 19 Armistead-Jehle P, Buican B: Evaluation context and symptom validity test performances in a U.S. military sample. Arch Clin Neuropsychol 2012; 27: 828–839.
20 Armistead-Jehle P, Hansen CL: Comparison of the repeatable battery for the assessment of neuropsychological status effort index and stand-alone symptom validity tests in a military sample. Arch Clin Neuropsychol 2011; 26: 592–601. 21 Cooper DB, Vanderploeg RD, Armistead-Jehle P, Lewis JD, Bowles AO: Factors associated with neurocognitive performance in OIF/OEF service members with postconcussive complaints in postdeployment clinical settings. J Rehabil Res Dev 2014; 51: 1023–1034. 22 Grills CE, Armistead-Jehle P: Performance validity test and neuropsychological assessment battery screening module performances in an active-duty sample with a history of concussion. Appl Neuropsychol Adult 2016; 23: 295–301. 23 Lange RT, Pancholi S, Bhagwat A, Anderson-Barnes V, French LM: Influence of poor effort on neuropsychological test performance in U.S. military personnel following mild traumatic brain injury. J Clin Exp Neuropsychol 2012; 34: 453–466. 24 Armistead-Jehle P: Symptom validity test performance in US veterans referred for evaluation of mild TBI. Appl Neuropsychol 2010; 17: 52–9. 25 Jurick SM, Twamley EW, Crocker LD, Hays CC, Orff HJ, Golshan S, Jak AJ: Postconcussive symptom over reporting in Iraq/Afghanistan Veterans with mild traumatic brain injury. J Rehabil Res Dev 2016; 53: 571–584. 26 Nelson NW, Hoelzle JB, McGuire KA, Ferrier-Auerbach AG, Charlesworth MJ, Sponheim SR: Evaluation context impacts neuropsychological performance of OEF/OIF veterans with reported combat-related concussion. Arch Clin Neuropsychol 2010; 25: 713–723.
27 Whitney KA, Davis JJ, Shepard PH, Herman SM: Utility of the Response Bias Scale (RBS) and other MMPI-2 validity scales in predicting TOMM performance. Arch Clin Neuropsychol 2008; 23: 777–786. 28 Young JC, Kerns LA, Roper BL: Validation of the MMPI-2 Response Bias Scale and Henry-Heilbronner Index in a U.S. veteran population. Arch Clin Neuropsychol 2011; 26: 194–204. 29 Slick DJ, Sherman EM, Iverson GL: Diagnostic criteria for malingered neurocognitive dysfunction: proposed standards for clinical practice and research. Clin Neuropsychol 1999; 13: 545–561. 30 Marx BP, Miller MW, Sloan DM, Litz BT, Kaloupek DG, Keane TM: Military-related PTSD, current disability policies, and malingering. Am J Public Health 2008; 98: 773–4. 31 Willis PF, Farrer TJ, Bigler ED: Are effort measures sensitive to cognitive impairment? Mil Med 2011; 175: 1426–31. 32 Armistead-Jehle P, Cooper DB, Vanderploeg RD: The role of performance validity tests in the assessment of cognitive functioning after military concussion: a replication and extension. Appl Neuropsychol Adult 2016; 23: 264–273. 33 Morey LC: Personality Assessment Inventory. Odessa, FL, Psychological Assessment Resources, 1991. 34 Ben-Porath YS, Tellegen A: MMPI-2-RF (Minnesota Multiphasic Personality Inventory-2-Restructured Form) Manual for Administration, Scoring, and Interpretation. Minneapolis, MN, University of Minnesota Press, 2008. 35 Wygant DB, Sellbom M, Ben-Porath YS, Stafford KP, Freeman DB, Heilbronner RL: The relation between symptom validity testing and MMPI-2 scores as a function of forensic evaluation context. Arch Clin Neuropsychol 2007; 22: 489–499.
36 Haggerty KA, Frazier TW, Busch RM, Naugle RI: Relationships among Victoria symptom validity test indices and personality assessment inventory validity scales in a large clinical sample. Clin Neuropsychol 2007; 21: 917–928. 37 Demakis GJ, Gervais RO, Rohling ML: The effect of failure on cognitive and psychological symptom validity tests in litigants with symptoms of post-traumatic stress disorder. Clin Neuropsychol 2008; 22: 879–895. 38 Sumanti M, Boone KB, Savodnik I, Gorsuch R: Noncredible psychiatric and cognitive symptoms in a workers’ compensation ‘stress’ claim sample. Clin Neuropsychol 2006; 22: 1080–1092. 39 VA/DoD clinical practice guideline for management of concussion/mild traumatic brain injury (mTBI). 2009. Retrieved July 17, 2016. Available at http://www.healthquality.va.gov/guidelines/Rehab/mtbi/concussion_mtbi_full_1_0.pdf. 40 VA/DoD clinical practice guideline for management of concussion-mild traumatic brain injury. 2016. Retrieved July 17, 2016. Available at http://www.healthquality.va.gov/guidelines/Rehab/mtbi/mTBICPGFullCPG50821816.pdf. 41 Schroeder RW, Twumasi-Ankrah P, Baade LE, Marshall PS: Reliable Digit Span: a systematic review and cross-validation study. Assessment 2012; 19: 21–30. 42 Tombaugh TN: Test of Memory Malingering. North Tonawanda, NY, Multi-Health Systems, 1996. 43 Larrabee GJ: Aggregation across multiple indicators improves the detection of malingering: relationship to likelihood ratios. Clin Neuropsychol 2008; 22: 666–679. 44 Wechsler D: WAIS-R: Wechsler Adult Intelligence Scale-Revised. New York, NY, Psychological Corporation, 1981. 45 Moore BA, Donders J: Predictors of invalid neuropsychological test performance after traumatic brain injury. Brain Inj 2004; 18: 975–984.
46 Whitney KA, Shepard PH, Williams AL, Davis JJ, Adams KM: The Medical Symptom Validity Test in the evaluation of Operation Iraqi Freedom/Operation Enduring Freedom soldiers: a preliminary study. Arch Clin Neuropsychol 2009; 24: 145–152. 47 Defense Manpower Research: Demographics of active duty U.S. military. 2013 November 23. Available at http://www.statisticbrain.com/demographics-of-active-duty-u-s-military/. 48 Corrigan JD, Bogner J: Initial reliability and validity of the Ohio State University TBI identification method. J Head Trauma Rehabil 2007; 22: 318–329. 49 Heilbronner RL, Sweet JJ, Morgan JE, Larrabee GJ, Millis SR, et al., Conference Participants: American Academy of Clinical Neuropsychology consensus conference statement on the neuropsychological assessment of effort, response bias, and malingering. Clin Neuropsychol 2009; 23: 1093–1129. 50 Slavin-Mulford J, Sinclair SJ, Stein M, Malone J, Bello I, Blais MA: External validity of the Personality Assessment Inventory (PAI) in a clinical sample. J Pers Assess 2012; 94: 593–600. 51 McDermott BE: Psychological testing and the assessment of malingering. Psychiatr Clin N Am 2012; 35: 855–876. 52 Schnellbacher S, O’Mara H: Identifying and managing malingering and factitious disorder in the military. Curr Psychiatry Rep 2016; 18: 105. 53 Connor H: The use of anesthesia to diagnose malingering in the 19th century. J R Soc Med 2006; 99: 444–447. 54 Anderson DL, Anderson GT: Nostalgia and malingering in the military during the Civil War. Perspect Biol Med 1984; 28: 156–166. 55 Article 115. Malingering, US Code Service.
Uniform Code of Military Justice. 56 Resnick PJ: Defrocking the fraud: the detection of malingering. Isr J Psychiatry Relat Sci 1993; 30: 93–101. 57 Bass C, Halligan P: Factitious disorders and malingering: challenges for clinical assessment and management. Lancet 2014; 383: 1422–1432. 58 Rogers R, Kropp PR, Bagby RM, Dickens SE: Faking specific disorders: a study of the structured interview of reported symptoms. J Clin Psychol 1992; 48: 643–648. 59 Guriel J, Fremouw W: Assessing malingered posttraumatic stress disorder: a critical review. Clin Psychol Rev 2003; 23: 881–904. Author notes The views expressed are solely those of the authors and do not reflect the official policy or position of the US Army, the Department of Defense, US Government, Dwight D. Eisenhower Army Medical Center TBI Clinic / Neuroscience & Rehabilitation Center, or University of South Carolina – Aiken. © Association of Military Surgeons of the United States 2018. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).

Journal: Military Medicine, Oxford University Press. Published: March 26, 2018.