The Policy Statement of the Houston Conference on Specialty Education and Training in Clinical Neuropsychology states that “the training of the specialist in clinical neuropsychology must be scientist-practitioner based” (Hannay et al., 1998, p. 160). In other words, all neuropsychologists – in clinics, academia, and elsewhere – are expected to have been trained to employ an evidence-based approach in their work. As such, this volume, edited by Stephen C. Bowden, has considerable value for those seeking to understand the key elements of evidence-based practice (both assessment and intervention) in clinical neuropsychology, by providing readers with clear guidelines for incorporating modern scientific practice into their work.

Any book starting with the words “Paul Meehl” is bound to seduce a certain type of psychologist, myself included. And in the spirit of Meehl, Bowden’s introductory chapter makes it very clear that strong research evidence – not clinical judgment – is the preferred data source to guide evidence-based practice in neuropsychology. In Chapter 1, Bowden uses an excellent example from his own work on the long-term stability of amnesia in Korsakoff’s syndrome to illustrate how clinical lore can easily gain a foothold in the minds of well-meaning neuropsychologists despite contradictory evidence. This example sets the stage for two important themes pursued throughout the book: research design (how to identify high-quality research) and rules of evidence (how to interpret data in a way that optimizes patient care).

This book is a must-read for all neuropsychologists and neuropsychologists-in-training. In fact, the chapter by Bowden and Finch (Chapter 5: “When is a Test Reliable Enough and Why Does it Matter?”) is perhaps the best clinically focused exposition of psychometric theory I have ever read, and – simply on its own – is worth the “price of admission,” so to speak.
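The core idea of that chapter – that a test’s reliability determines the precision of any obtained score, and thus the width of the interval that should accompany it – can be sketched with the classical standard error of measurement. This is a minimal illustration of the standard textbook formula; the scores and reliability coefficients below are hypothetical, not drawn from the book.

```python
import math

def score_interval(observed, sd, reliability, z=1.96):
    """Approximate 95% interval around an observed score, using the
    classical standard error of measurement: SEM = SD * sqrt(1 - r)."""
    sem = sd * math.sqrt(1 - reliability)
    return observed - z * sem, observed + z * sem

# An IQ-style score of 85 (SD 15) on tests of differing reliability:
print(score_interval(85, sd=15, reliability=0.95))  # fairly tight interval
print(score_interval(85, sd=15, reliability=0.70))  # roughly +/- 16 points
```

The same point score carries very different information depending on reliability, which is precisely why interval estimates are more honest than point estimates alone.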
Although every neuropsychologist is expected to receive adequate training in psychometrics, it is unfortunately all too common to see clinicians or researchers fail to fully understand or utilize these important statistical and conceptual tools in their work. Bowden and Finch articulate the importance of reliability, and the steps needed to derive interval estimates from neuropsychological test scores, so clearly that there is no good excuse for any neuropsychologist failing to heed their advice after reading this chapter.

Building on the emphasis Bowden and Finch place on the value of reliability, its role in determining measurement precision, and the importance of making interval – as opposed to only point – estimates, Hinton-Bayre and Kwapil’s Chapter 6 (“Best Practice Approaches for Evaluating Significant Change for Individuals”) discusses the importance of interval estimates when interpreting changes in test scores across serial assessments. The statistics underlying the construction of reliable change indices (RCIs) have progressed rapidly over the last several decades, and Hinton-Bayre and Kwapil do an excellent job conveying the importance of RCIs, describing the psychometric details of several of the most useful RCI methods, and walking the reader through the steps needed to both understand and generate RCIs for their own assessment data.

This book is strong in other areas as well. For instance, in Chapter 3 (“Construct Validity has a Critical Role in Evidence-Based Neuropsychological Assessment”), Jewsbury and Bowden build on the preceding chapter, discussing three competing theories underlying the science of cognitive assessment and persuasively describing the abundance of evidence in favor of the Cattell–Horn–Carroll (CHC) model of cognitive functioning.
Importantly, their discussion of the CHC model is not purely theoretical; the chapter also describes existing test batteries (e.g., Woodcock-Johnson) that conform to the CHC model and presents evidence that factor-analytic studies of most comprehensive neuropsychological test batteries generate data that can be modeled well using the CHC theory’s broad constructs.

Later in the book, Chapters 7, 9, 10, and 11 work well together in providing knowledge and specific tools for the critical evaluation of published research in neuropsychology. Chelune’s Chapter 7 (“Evidence-Based Practices in Neuropsychology”) provides a general overview of evidence-based practice and some of the most important principles, skills, and statistical tools available to neuropsychologists. Chapter 9 (“Use of Reporting Guidelines for Research in Clinical Neuropsychology: Expanding Evidence-Based Practice Through Enhanced Transparency”), authored by Schoenberg, Osborn, and Soble, may at first blush appear to be targeted toward researchers, given its emphasis on publishing guidelines implemented by numerous scientific journals. However, the authors clearly convey how these guidelines (which are used to ensure proper scientific reporting of randomized trials, observational studies, diagnostic accuracy studies, systematic reviews and meta-analyses, and instrument development and validation studies) are also essential tools for non-researchers who aspire to be informed consumers of the scientific literature. Building nicely upon Chapters 7 and 9, Bunnage’s Chapter 10 (“How do I know when a Diagnostic Test Works?”) goes into even more detail about diagnostic accuracy studies and how to calculate and interpret the wide array of statistics that can be generated from this research design.
This chapter includes, but is not limited to, a thorough discussion of the crucial sensitivity and specificity statistics, with an essential emphasis on how these values must be interpreted in the context of base rates. Next, Chapter 11 (“Applying Diagnostic Standards to Performance Validity Tests and the Individual Case”) by Berry, Harp, Koehl, and Combs provides an even more applied treatment of diagnostic accuracy studies, geared toward helping the reader understand and critically evaluate the performance validity literature in the context of the Standards for the Reporting of Diagnostic Accuracy Studies (STARD) criteria.

Readers will also appreciate Chapter 12 (“Critically Appraised Topics for Intervention Studies: Practical Implementation Methods for the Clinician-Scientist”), in which Miller thoroughly describes critical appraisal and critically appraised topics (CATs). Whereas most of the book focuses on assessment, Miller’s contribution is welcome for several reasons. The focus on using an evidence-based approach to interventions serves as a reminder that clinical neuropsychology is not simply an academic pursuit, but one conducted with the goal of enhancing patient care. Further, evidence-based practice does not stop after the assessment data have been collected and scored; clinical neuropsychologists should also have the skills and knowledge to integrate their assessment data with the scientific literature to make patient-centered recommendations that are relevant, targeted, and supported by high-quality evidence. This chapter provides a wealth of background information that will undoubtedly be valuable to neuropsychologists unfamiliar with CATs, and offers clear and concise strategies for using critical appraisal to the benefit of individual patients. Neuropsychologists who understand evidence-based principles will also understand the pervasiveness of variability in any intellectual endeavor.
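The base-rate caveat emphasized in Chapter 10 is easy to make concrete with Bayes’ theorem: the same sensitivity and specificity yield very different positive predictive values in different clinical settings. The figures below are hypothetical, chosen only to illustrate the arithmetic.

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(condition | positive test), via Bayes' theorem."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# The same hypothetical test in a memory clinic (base rate 50%)
# versus a community screening setting (base rate 5%):
print(round(positive_predictive_value(0.85, 0.90, 0.50), 2))  # 0.89
print(round(positive_predictive_value(0.85, 0.90, 0.05), 2))  # 0.31
```

In the low-base-rate setting, most positive results are false positives despite respectable sensitivity and specificity, which is exactly why these statistics cannot be interpreted in isolation.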
Although each chapter of this book provided a valuable contribution, I found some weaker than others. For instance, Chapter 2 (“Theory as Evidence: Criterion Validity in Neuropsychological Testing,” by Riley, Combs, Davis, & Smith) is a somewhat unfocused take on construct validity and its application to the theories underlying test development. The section on “off-label” use of neuropsychological tests was appreciated, although an opportunity was missed to discuss how measurement invariance testing is the preferred scientific approach to evaluating a test’s validity in diverse samples. This chapter is perhaps better viewed as a useful segue to Chapter 3 (which focuses on construct validity) than as an authoritative resource for all issues related to criterion validity.

I found parts of Chapter 4 (“Contemporary Psychopathology Assessment: Mapping Major Personality Inventories onto Empirical Models of Psychopathology”) quite illuminating, especially the first half of the chapter, which focuses on what the authors (Lee, Sellbom, & Hopwood) call the multivariate correlated liabilities model (MCLM) of psychopathology. However, the second half of the chapter, which sought to describe how scores derived from the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) and the Personality Assessment Inventory (PAI) could be interpreted from the perspective of the MCLM, was less effective. Finally, Chapter 8 (“Evidence-Based Integration of Clinical Neuroimaging Findings in Neuropsychology”), by Bigler, felt somewhat out of place in the current volume. Although this chapter provided a good overview of neuroimaging in general, it fell somewhat short of its specific goal of discussing evidence-based integration of neuroimaging with neuropsychological assessment data.
Given the book’s emphasis on psychometric properties such as reliability and diagnostic accuracy, I would have welcomed a section of this chapter focused on the “psychometric” properties of neuroimaging data. For example, how reliable are volumetric measurements of the hippocampus when measured longitudinally? How sensitive and specific are neuroimaging findings when compared to some criterion standard for the absence or presence of a disease state? What is the incremental validity of neuroimaging data in the context of a known neuropsychological profile, or vice versa? Nevertheless, readers unfamiliar with neuroimaging will likely find this chapter quite educational, if not entirely cohesive with the rest of the book.

Some of the book’s most prominent strengths actually highlight some of its weaknesses. As described above, Chapter 5 persuasively illustrates the value of interval estimation, and the subsequent chapter on reliable change provides a strong, practical illustration of why reliability and measurement precision are important and useful in neuropsychology. In contrast, some of the other chapters that also focus on psychometrics, especially diagnostic accuracy (e.g., Chapters 7, 10, and 11), provide only point estimates, not interval estimates. Obviously, statistics such as sensitivity, specificity, positive and negative predictive values, and likelihood ratios are also susceptible to measurement error that varies according to a test’s reliability and criterion validity. Therefore, an area for potential improvement in future editions of this book would be to incorporate the emphasis on interval estimation into all psychometrically focused chapters. As scientific evidence continues to accumulate, neuropsychologists must continually update their evidence-based skill set.
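The call for interval estimates around diagnostic statistics is straightforward to act on: an observed sensitivity or specificity is simply a proportion, and a Wilson score interval makes the effect of sample size visible. The counts below are hypothetical, chosen only to show how much the interval narrows with a larger validation sample.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion
    (e.g., an observed sensitivity of successes/n)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# The same observed sensitivity of .85, from 40 versus 400 clinical cases:
lo, hi = wilson_interval(34, 40)
print(round(lo, 2), round(hi, 2))    # 0.71 0.93
lo, hi = wilson_interval(340, 400)
print(round(lo, 2), round(hi, 2))    # 0.81 0.88
```

A “sensitivity of .85” derived from 40 cases is compatible with values from roughly .71 to .93, an interval wide enough to change clinical decisions, whereas the same point estimate from 400 cases is far more informative.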
This book will undoubtedly be a valuable resource for anyone interested in evidence-based practice, ranging from first-year graduate students to the most experienced among us. As a graduate educator in a clinical psychology PhD program, I believe that most graduate students would benefit from reading this as a primary text within clinical neuropsychology practicum rotations. In particular, this book appears to have tremendous value for educational situations where an assigned chapter can be read and then discussed under the direction of an experienced neuropsychologist. This book should also be considered important reading for test developers, as it provides readers with an enhanced appreciation of the need for test manuals to publish all relevant data, given that clinical care may be negatively impacted by incomplete reliability and validity data. With the knowledge provided by this book at their disposal, readers will be better equipped to communicate with test developers and – acting as informed consumers – will be in a better position to demand comprehensive psychometric data as a non-negotiable prerequisite for the purchase of any new assessment instrument.

Returning to the two key themes introduced in Chapter 1 – research design and rules of evidence – this book does an excellent job integrating these themes throughout most chapters in a cohesive and readable manner. Given that psychology has recently been described as being in the midst of a replication crisis, the focus on research design and rules of evidence is more relevant now than ever. As additional threats to reproducibility continue to be identified, future editions of this book may wish to expand the focus on research design to include cutting-edge topics that have become more mainstream in the last several years, such as study pre-registration, open science (e.g., sharing of data and statistical code), “researcher degrees of freedom,” and Bayesian analysis.
Even without these topics, however, Neuropsychological Assessment in the Age of Evidence-Based Practice is an indispensable resource that I strongly recommend reading.

Reference

Hannay, H. J., Bieliauskas, L. A., Crosson, B. A., Hammeke, T. A., Hamsher, K. deS., & Koffler, S. P. (1998). Proceedings: The Houston Conference on Specialty Education and Training in Clinical Neuropsychology. Archives of Clinical Neuropsychology, 13, 157–250. doi:10.1093/arclin/13.2.160

© The Author(s) 2018. Published by Oxford University Press. All rights reserved.
Archives of Clinical Neuropsychology – Oxford University Press
Published: Aug 1, 2018