Testing a potential alternative to traditional identification procedures: Reaction time-based concealed information test does not work for lineups with cooperative witnesses

Abstract

Direct eyewitness identification is widely used, but prone to error. We tested the validity of indirect eyewitness identification decisions using the reaction time-based concealed information test (CIT) for assessing cooperative eyewitnesses' face memory as an alternative to traditional lineup procedures. In a series of five experiments, a total of 401 mock eyewitnesses watched one of 11 different stimulus events that depicted a breach of law. Eyewitness identifications in the CIT were derived from longer reaction times to the faces encountered in the stimulus event as compared to well-matched foil faces not encountered before. Across the five experiments, the weighted mean effect size d was 0.14 (95% CI 0.08-0.19). The reaction time-based CIT seems unsuited for testing cooperative eyewitnesses' memory for faces. The careful matching of the faces required for a fair lineup or the lack of intent to deceive may have hampered the diagnosticity of the reaction time-based CIT.

* Melanie Sauerland
melanie.sauerland@maastrichtuniversity.nl
1 Section Forensic Psychology, Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
2 Department of Clinical Psychology, University of Amsterdam, Amsterdam, The Netherlands

Psychological Research (2019) 83:1210-1222

Introduction

Eyewitnesses' memory for the face of a perpetrator is commonly tested by means of an identification procedure, for example, a lineup or showup. It is well established that eyewitnesses who are submitted to such a procedure can help solve a crime by pointing out the actual perpetrator, but it is equally well known that eyewitnesses can err. In the worst case, a wrongful identification decision can lead to a wrongful conviction while allowing the guilty party to remain free and reoffend. Wrongful identifications were involved in 70% of the wrongful convictions uncovered by the Innocence Project (innocenceproject.org; cf. Kassin et al., 2012; Wells et al., 1998). While identification accuracy can vary widely across conditions, different meta-analyses show that, on average, accuracy for six-person lineups (i.e., seven answer options: all six lineup members and the option to reject the lineup) revolves around 50% (e.g., Clark et al., 2008; Fitzgerald & Price, 2015; Steblay et al., 2011). Although proper lineup construction and administration can increase accuracy rates (e.g., Brewer & Palmer, 2010), the risk of false identifications remains and continues to be a major concern in the field. Scholars have recently questioned researchers' sustained confinement to the traditional eyewitness identification paradigm (Brewer & Wells, 2011; Wells et al., 2006). More specifically, it has been argued that existing research may not be radical enough, with new procedures merely constituting adaptations of existing ones (Dupuis & Lindsay, 2007), rather than generating fundamentally new approaches for testing eyewitnesses' memory for faces. Existing procedures rely on explicit identification, often after some deliberation. One possible source of error is constructive identification through reasoning (i.e., the culprit is likely to be included and number 4 looks most like him, so it must be number 4). Grosser errors in explicit identification may come from uncooperative eyewitnesses who deliberately point to the wrong person (e.g., to protect someone else, because they were bribed, or after being threatened; see Leach et al., 2009; Parliament & Yarmey, 2002). In other words, explicit identification is prone to subtle biases in human decision making and to strategic misidentification. One alternative might be to rely on indirect measures. Such responses are attractive in the sense that they may be unintentional, uncontrollable, goal independent, autonomous, purely stimulus driven, unconscious, efficient, or fast (Moors & De Houwer, 2006).

First evidence supporting the idea that indirect measures can provide information about face recognition comes from two studies with pre-school and school children (Newcombe & Fox, 1994; Stormark, 2004). In these studies, participants first viewed a slide show of previously familiar faces (playmates or previous classmates) and unfamiliar faces, while their skin conductance, heart rate, or both were recorded. Subsequently, direct face recognition responses were collected. Although both direct and indirect measures scored above chance in both studies, the indirect measures outperformed direct recognition decisions. In the current line of research, we embraced the call for exploring a potential adaptation of the identification procedure in a venture that tested an indirect index of eyewitness identification: the concealed information test (CIT; Lykken, 1959).
The CIT is a well-established memory detection technique (for a comprehensive review, see Verschuere et al., 2011). At first sight, the CIT looks much like a multiple-choice examination, presenting the examinee with the correct answer embedded amongst a series of incorrect answers. The CIT is used when the examinee may not be able or willing to explicitly identify the correct alternative and, therefore, does not rely upon an explicit answer but rather on more automatic responses to determine recognition. Suppose an exclusive blue Porsche has been stolen, and the police have a suspect who denies any involvement in or knowledge of that theft. The suspect of the car theft could be asked about the stolen car: Was it … a white Bentley? … A green Mercedes? … A blue Porsche? … A yellow Ferrari? … A black Jaguar? Stronger (e.g., electrodermal) responding to the actually stolen car compared to the other cars is taken as an index of recognition. When combining several questions, the CIT can detect concealed recognition with high validity.

Reviewing a range of indices, varying from event-related potentials (ERPs) to reaction times, Meijer et al. (2016) reported the diagnostic efficiency of the CIT (i.e., the area under the curve) to be around 0.82-0.94. This means that in such studies, a randomly chosen person with recognition has an 82-94% chance to respond more strongly in the CIT than a randomly chosen person without recognition. In recent years, there has been growing interest in the use of reaction times as the response measure in the CIT (for a review, see Suchotzki et al., 2017). Response times can be administered and analyzed cost- and time-efficiently, requiring only a single computer. In the reaction time-based CIT, the answer alternatives are presented briefly, one by one, on the computer screen. To assure attention to the stimuli, the examinee engages in a binary classification task, pressing a unique button for a set of stimuli learned just before the test (i.e., the targets) and another button for all other stimuli (including the correct answer or probe, as well as all foils, called irrelevants). Building on the example above, the examinee may be told that the CIT will examine recognition of the stolen car, and asked to press the YES button whenever encountering the target (a red Maserati) and the NO button for all other cars. For the innocent examinee, all NO-reaction times will be roughly similar. For the guilty examinee, the blue Porsche will stand out and grab attention. Longer reaction times for the blue Porsche as compared to the other NO-reaction times provide an index of recognition. After the initial validation of the reaction time-based CIT (Farwell & Donchin, 1991; Seymour & Kerlin, 2008; Seymour & Schumacher, 2009; Seymour et al., 2000), several recent well-powered studies have confirmed its diagnostic efficiency (Kleinberg & Verschuere, 2015, 2016; Verschuere et al., 2015, 2016; for a discussion of its boundary conditions and limitations, see Verschuere et al., 2011; and Meijer et al., 2016).

Meijer et al. (2007) conducted two studies to examine whether the ERP-based CIT is sensitive to concealed face recognition. In their first experiment, the CIT was capable of picking up recognition of the faces of siblings and close friends. In their second experiment, the CIT did not show students' recognition of their faculty professors' faces. Seymour and Kerlin (2008) had participants memorize a set of previously unknown faces, and the reaction time-based CIT showed high accuracy in concealed face recognition. The stimuli used in these studies, however, were not typical of eyewitness identification, because the correct faces were either very familiar or well memorized rather than incidentally encountered, as in the case of eyewitnesses. In addition, they were not matched in terms of their outer appearance. As such, they would not meet the requirements of a formal identification procedure in an investigation (cf. Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998). More specifically, Wells et al.'s (1998) rule 3 concerning the structure of lineups and photospreads states:

  The suspect should not stand out in the lineup or photospread as being different from the distractors based on the eyewitness's previous description of the culprit or based on other factors that would draw extra attention to the suspect. (p. 630)

This rule is further specified with the fit-description criterion, which stresses that distractors should fit the eyewitness's verbal description of the perpetrator (Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998).
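The area-under-the-curve figures quoted above have a direct pairwise interpretation, which a short sketch can make concrete. This is illustrative only, not the authors' analysis code; the scores below are hypothetical probe-minus-irrelevant reaction time differences (ms).

```python
# Illustration (not the authors' code): the CIT's "diagnostic efficiency"
# (area under the ROC curve) equals the probability that a randomly chosen
# person WITH recognition scores higher than one WITHOUT recognition.

def auc(with_recognition, without_recognition):
    """Pairwise AUC: P(score_with > score_without); ties count as 0.5."""
    pairs = [(w, wo) for w in with_recognition for wo in without_recognition]
    wins = sum(1.0 if w > wo else 0.5 if w == wo else 0.0 for w, wo in pairs)
    return wins / len(pairs)

guilty = [45, 60, 30, 55, 70, 25, 40]     # hypothetical recognizers
innocent = [5, -10, 30, 0, 50, -5, 10]    # hypothetical non-recognizers

print(round(auc(guilty, innocent), 2))  # → 0.89, i.e., within the 0.82–0.94 range
```

An AUC of 0.5 would mean the measure cannot separate the two groups at all; 1.0 would mean perfect separation.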
Thus, when the eyewitness describes the perpetrator as 'young, white female, blond hair', the lineup should consist of young white females with blond hair.

Lefebvre et al. (2007) were the first to propose the CIT for the purpose of eyewitness identification, namely, to use incidentally encountered faces and to match faces following guidelines for eyewitness identification. Participants watched four mock crimes across two testing sessions. In the perpetrator-present conditions, participants were presented with the photograph of the perpetrator, the victim, and five foils, one by one, on the computer screen, while electrophysiological recordings were made. Deviating from the classic CIT procedure, participants could respond to each picture by pressing one of three buttons, indicating that this picture depicted the perpetrator, the victim, or another person. In other words, participants made an explicit identification in this ERP-based CIT. The CIT revealed recognition of the perpetrator, and so did explicit identification. While the results point to the potential of the CIT for cooperative eyewitness identification, the electrophysiological index of recognition may have been evoked by the explicit identification. In a second ERP-based CIT study (Lefebvre et al., 2009), the effects were replicated, but also extended by examining the role of active concealment. In the deceptive condition, participants concealed the identity of the perpetrator from the experimenters by pressing the button that corresponded with an innocent individual rather than the perpetrator. Results confirmed the earlier finding, showing that even when trying to conceal their knowledge, the CIT revealed recognition of the perpetrator's face.

Taken together, there is preliminary evidence that the ERP-based CIT may be useful for testing the facial memory of cooperative eyewitnesses. In the present research line, we examined whether the findings extend to the reaction time-based CIT, which is much easier to apply. This was tested in a series of five experiments. We expected that the recognition of a face previously encountered in a stimulus event (probes) would be reflected in longer reaction times, compared to reaction times for irrelevants.

Overview of the studies

Participants witnessed a crime involving one or more individuals. The subsequent reaction time-based CIT assessed face recognition of the individuals involved in the crime. Using the classic CIT procedure, participants pressed one specific key for all stimuli (i.e., irrelevants and probes), except for the target stimulus that was memorized prior to the CIT. (Note that, unlike common terminology in eyewitness identification studies, the target does not denote the person seen during the stimulus event. Rather, this individual is dubbed the probe, whereas the term target describes the person to whom the participant has to react differently in the CIT.) The progression of the five conducted experiments can be described as follows: in Experiment 1, one stimulus film depicting four actors who played a thief, a victim, and two bystanders was used. The lineup referring to each actor was presented prior to the referring CIT to obtain a lineup performance measure that was unimpaired by CIT presentation. In all subsequent experiments, the CIT was presented first, to obtain CIT performance that was unimpaired by participants' lineup decisions. The use of only one stimulus film in Experiment 1 raised the question whether diverging findings could be attributed to certain roles the film featured (i.e., more attention paid to the thief than to a bystander) or to characteristics of certain actors (e.g., higher or lower distinctiveness). Therefore, we used different stimulus film versions for all subsequent experiments, in which actors switched roles across versions while the plot was identical.

Following null findings and contradictory results in Experiments 1 and 2, and emerging insights into the validity of the reaction time-based CIT, we realized that we may have used a suboptimal CIT protocol. Indeed, Verschuere et al. (2015) showed that using a separate CIT per probe (i.e., one for the victim, one for the thief, etc.) reduced accuracy and that it is recommended to use one CIT that presents all items completely intermixed (see also Lukasz et al., 2017). In Experiments 3-5, we therefore administered such a multiple-probe CIT, in which all probes (that is, all actors that appeared in the stimulus event) and all corresponding irrelevant items were presented in random order. Following small effect sizes in Experiment 3, we considered the possibility that our stimulus films had not allowed for sufficient encoding of the actors' faces. We therefore prepared a less complex stimulus film with only two actors and optimal viewing conditions (long facial viewing time, including close-ups, for both actors) for Experiment 4. Indeed, small but significant effects materialized for the two actors (thief and victim) in this experiment. The final experiment (Experiment 5) additionally addressed three issues. For one, Experiment 5 included an additional practice block and a minimum proportion of accurate reactions during practice before a participant could move on to the actual CIT. Second, a virtual reality event was used instead of a real-life film, to be able to better control the actions and exposure of the subjects featured in the mock crime and to offer participants a more realistic experience of the mock crime (cf. Gorini et al., 2007; Kim et al., 2014; Riva, 2005; Schultheis & Rizzo, 2001). Finally, we included two control objects in the stimulus event. Finding an effect for the objects but not the faces would replicate earlier findings concerning objects (e.g., Suchotzki et al., 2014; Verschuere et al., 2004; Visu-Petra et al., 2012), showing the validity of the CIT for objects, and strengthen the conclusion of an absent effect for lineup faces. Anticipating the results, we found a CIT effect for objects, but not for lineup faces. Comparison of the methodology in the current studies with CIT research on memory detection in suspects opens new perspectives on when the reaction time-based CIT may serve as a useful tool to diagnose face recognition in cooperative eyewitnesses.

Method

Data are publicly available at the following link: http://hdl.handle.net/10411/2MUUTT.
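The multiple-probe protocol described above, with all probes, targets, and irrelevants fully intermixed and presented in random order, can be sketched as follows. This is an illustration, not the authors' presentation software; the stimulus labels and the flat (non-blocked) randomization are assumptions, but the counts follow the paper (Experiment 3: 4 probes, 4 targets, and 4 × 5 = 20 irrelevants, each presented 21 times, yielding 588 trials).

```python
# Sketch (assumed details) of a multiple-probe CIT trial list: every stimulus
# is repeated `presentations` times and the whole list is shuffled, so probes,
# targets, and irrelevants for all roles are completely intermixed.
import random

def build_trials(n_probes=4, foils_per_probe=5, presentations=21, seed=1):
    stimuli = ([f"probe_{i}" for i in range(n_probes)]
               + [f"target_{i}" for i in range(n_probes)]
               + [f"irrelevant_{i}_{j}" for i in range(n_probes)
                  for j in range(foils_per_probe)])
    trials = stimuli * presentations
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trials()
print(len(trials))  # → 588 trials, as for Experiment 3 in Table 1
```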
Participants

In total, 436 participants were tested, 35 of whom were excluded. More specifically, these participants did not press the accurate key (i.e., left shift key for targets, right shift key for irrelevants and probes, or vice versa) in 50% of the trials of one response category (i.e., responses to probes, targets, or irrelevants; following Lukasz et al., 2017; Kleinberg & Verschuere, 2015, 2016; Verschuere & Kleinberg, 2016; Verschuere et al., 2015). The numbers of included participants in Experiments 1-5 were 55, 107, 84, 75, and 80, respectively (N = 401; 299 women and 102 men, M age = 21.44 years, SD = 2.48). Participants were mostly Bachelor (88.0%) or Master students (9.7%) who studied at the Faculties of Psychology and Neuroscience (80.9%) or Health, Medicine and Life Sciences (6.6%), the School of Business and Economics (4.3%), or other (8.2%). The most common native languages were German (46.5%) and Dutch (31.2%; for Experiments 2-5; native language was not assessed in Experiment 1). Participants received study credit or a gift voucher in return for their participation. The research line was approved by the research board of the faculty.

Design

A within-subjects factorial design contrasting reaction times to probes vs. irrelevant faces was employed for all experiments.

Materials

Stimulus events

Four different stimulus events were used. They depicted a theft (Experiments 1-4) or the vandalism of a car (Experiment 5) and included one or two perpetrators, a victim, and sometimes one or two bystanders. The number of actors involved in the events was either four (Experiments 1-3) or two (Experiments 4 and 5).

Experiment 1: The first stimulus film involved four actors (thief, victim, two bystanders) and depicted the theft of a wallet in a student cafeteria (duration: 5:05 min). A detailed description can be found in Sauerland, Sagana, and Sporer (2012).

Experiments 2 and 3: For these studies, four different stimulus film versions depicting the theft of a purse in a bar were used. Across film versions, the four female actors switched the roles of the thief, the accomplice, the victim, and a bystander, while the plots were identical. This was to avoid possible confounding effects of actor and role. For example, if only one film version is used, it is unclear whether an effect might be attributable to the characteristics of a particular person (i.e., distinctive features) or to the role (e.g., more attention paid to the thief than to a bystander). All versions lasted approximately 3:20 min. A detailed description can be found in Sauerland et al. (2014).

Experiment 4: Two film versions depicting the theft of a cell phone, involving a thief and a victim, were created (duration: 1:13 min). Analogous to Experiments 2 and 3, the two female actors switched the roles of thief and victim across film versions. The action can be described as follows: a young woman (i.e., the subsequent thief) rushes from a cafeteria to the train station when she runs into another young woman (i.e., the subsequent victim), resulting in both of their bags falling to the ground. While the thief yells angrily at the victim, both pick up their bags and the contents that had fallen out; then they walk away. When the victim searches for her phone in her bag, she cannot find it and runs after the thief. The thief is seen running towards the train holding the victim's phone in her hand.

Experiment 5: This experiment used a virtual reality event as the stimulus event. This allowed for more control over the actions and exposure of the individuals and objects. In this 1:05-min event, two young women walk through a lighted city street at night. One (woman 1) plays music on her phone, while the other (woman 2) drops a coke bottle (object 1). Woman 1 walks up to a parked car and jumps on the hood to dance. Across the street, the observer sees a building with a neon casino sign (object 2). Woman 2 dances next to the car and later kicks off one of its side mirrors. When a car drives around the corner, both women run away. The faces of both women can be seen for most of the duration of the film. Each of the two roles could be played by two avatars each, resulting in four identical event versions with the avatar constellations AB, Ab, aB, and ab.

Reaction time-based concealed information test

In the beginning of the CIT, participants are instructed to press the right shift key as fast as possible in response to a facial stimulus, with one exception, the target. For this stimulus, they should press the left shift key rather than the right one. Participants are then presented with the target for 30 s, accompanied by instructions to encode this face. In a practice block, participants were provided with feedback (good, wrong, or too slow). All CIT stimuli were shown twice and participants were given 1500 ms to react before the next stimulus was shown, following an inter-stimulus interval of 1000 ms.

Table 1. Methodological specifics of the five experiments

                                     Exp 1              Exp 2              Exp 3              Exp 4              Exp 5
N                                    55                 107                84                 75                 80
Cover story                          Yes                Yes                Yes                No                 No
Stimulus event                       Staged mock video  Staged mock video  Staged mock video  Staged mock video  Virtual reality event
Duration of event                    5:05               3:20               3:20               1:13               1:05
Event versions                       1                  4                  4 (same as Exp 2)  2                  4
Number of actors                     4                  4                  4                  2                  4
Number of roles in stimulus event    4                  4                  4                  2                  2
Number of objects included           0                  0                  0                  0                  2
Number of practice blocks            4 (1 per role)     4 (1 per role)     1                  1                  2
CITs                                 4 (1 per role)     4 (1 per role)     1                  1                  1
CIT test protocol                    1 person           1 person           Multiple persons   Multiple persons   Multiple persons
Stimulus presentations (blocks)      20                 20                 21                 21                 21
Number of trials                     560 (140 per CIT)  560 (140 per CIT)  588                294                588
Lineup presentation                  Before each CIT    After each CIT     After CIT          After CIT          After CIT
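The exclusion rule described in the Participants section (a participant is dropped when fewer than 50% of responses are accurate for any one response category) can be sketched as follows. This is an illustration, not the authors' code, and the trial representation is an assumption.

```python
# Sketch (not the authors' code) of the participant exclusion rule:
# exclude when accuracy drops below 50% for any one response category
# (probes, targets, or irrelevants).

def should_exclude(trials, threshold=0.5):
    """trials: list of (category, correct) tuples,
    category in {'probe', 'target', 'irrelevant'}."""
    by_cat = {}
    for category, correct in trials:
        hits, total = by_cat.get(category, (0, 0))
        by_cat[category] = (hits + int(correct), total + 1)
    return any(hits / total < threshold for hits, total in by_cat.values())

# Hypothetical participant: targets only 40% correct -> excluded.
trials = ([("probe", True)] * 9 + [("probe", False)]
          + [("target", True)] * 4 + [("target", False)] * 6
          + [("irrelevant", True)] * 10)
print(should_exclude(trials))  # → True
```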
The size of the facial stimuli was 260 pixels × approximately 220 pixels. In Experiment 5, two practice blocks (rather than one) were conducted, with an optional third one if participants had more than 50% errors or misses in the second practice block. This served to decrease the number of wrongful responses and subsequent exclusions experienced in the former experiments. Following the practice block, the experimenter left the room and the actual task began. Every stimulus was presented 21 times (Experiments 1 and 2: 20 times), with presentations in random order, resulting in 294 to 588 trials. In Experiments 4 and 5, the question "Do you recognize this?" above every stimulus and the labels "YES" and "NO" on the left and right sides of the screen were added. This was to increase the difficulty of inhibiting a left (YES) response while the participant actually did recognize the face. In this phase, no feedback was given. The use of the left vs. right shift key was counterbalanced across participants. The methodological specifics of the CITs of each experiment are summarized in Table 1.

In Experiments 1 and 2, the CIT stimuli presented included one probe (i.e., the face of one of the persons seen in the stimulus film), a target (the face participants were instructed to encode at the beginning of the CIT), and five irrelevants (i.e., foils). Participants were successively presented with four different CITs, one for each probe. In Experiments 3-5, only one CIT was administered, which included multiple probes, namely, all of the individuals that participants had seen in the stimulus film. For Experiment 3, this means that the pictures presented in the CIT included four probes, four targets, and 4 × 5 = 20 irrelevants. The CIT of Experiment 4 included two probes, two targets, and 2 × 5 = 10 irrelevants. The CIT of Experiment 5 included two facial probes and two object probes, two facial targets and two object targets, 2 × 5 = 10 facial irrelevants, and 2 × 5 = 10 object irrelevants.

CIT and lineup photos

Facial pictures: For the facial pictures of probes, targets, and irrelevants, the individuals took off jewelry, eyeglasses, and hair accessories and wore their hair loose. The clothing of each person differed from one another, and the probes additionally wore different clothing in the film than in the photograph. The photographs were taken against a white wall and edited to display a person from the collarbone up. The selected pictures fit the general descriptions of the actors depicted in the different stimulus events (i.e., the probes). More specifically, for each actor, six matching pictures were selected. One of these pictures was selected to serve as the target, and the remaining five pictures served as irrelevants in the CIT. In Experiments 1 and 3, one target was pre-selected at random for all participants, whereas in the other experiments, a target was randomly selected for each participant. For the virtual reality event, seven avatars that matched in their general person description were created for each of the two roles (i.e., 14 avatars). Two avatars each were selected to appear in the different stimulus event versions (i.e., four avatars). Analogous to Experiments 1-4, the avatars from the stimulus event served as probes, one avatar each served as target, and the remaining avatars served as irrelevants.

Object pictures: Fourteen object pictures were created for Experiment 5. The pictures of a coke bottle and a casino sign were expected to be salient stimuli in the stimulus event (Kleinberg & Verschuere, 2015; Lieblich et al., 1976). Six additional objects each, falling into the categories consumer goods (hamburger, pack of cigarettes, can of beer, chocolate bar, bag of French fries, and bottle of whiskey) and façade decoration (hotel sign, Advent wreath, Dutch flag, art show sign, carnival garland, and occupation banner reading "This is ours"), were created to serve as targets and irrelevants (foils). The objects that served as targets/irrelevants were randomly selected for each participant.

Lineups and lineup construction

The facial pictures described above were used to construct the actor-present and actor-absent lineups. (The literature commonly refers to target-present and target-absent lineups. This terminology interferes with the CIT terminology, though, in which the probe, not the target, is the person previously seen. Therefore, we refer to actor-present and actor-absent lineups in this article when referring to lineups that do or do not include the person participants saw in the stimulus event.) Lineups were composed of six photographs numbered 1-6 that were arranged in two rows of three pictures (i.e., a simultaneous lineup). All distractors and the replacement (i.e., the extra distractor added to actor-absent lineups) fitted the general description of the referring actor, as determined by presenting independent samples of mock witnesses (ns between 20 and 31) who had not viewed the stimulus event with a description of each actor together with the referring lineup (e.g., 'She is about 20 years old. She has long, brown hair. She has a slim to normal figure'). These mock witnesses were then asked to select the person from the lineup who matched the description best (Doob & Kirshenbaum, 1973). Effective lineup sizes for actor-present and actor-absent lineups, determined as Tredoux's E, were satisfactory and ranged from 3.2 to 5.6 of a possible 6 (M = 4.2; Tredoux, 1998, 1999).

Procedure

Participants signed the informed consent form and provided demographic data. Before watching the stimulus event, participants in Experiments 1-3 were instructed to pay close attention to the film, because they would be asked questions about it later on. In Experiments 4 and 5, participants were instructed to pay particular attention to the faces and to encode them in as much detail as they could. In Experiment 5, participants were given additional instructions about the use of the virtual reality goggles and were handed headphones once they had put on the goggles. They then first saw an orientation environment, which consisted of a big open space, and which allowed them to check whether the goggles were placed correctly and gave them the chance to get used to being in a virtual reality environment. Then, the CIT task was started. The final part of the experiment was the administration of the lineups, one for each person or avatar that appeared in the stimulus event. Deviating from the described procedure, in Experiment 1, each of the four CITs was preceded by the matching lineup, whereas in Experiment 2, each of the four lineups was preceded by the matching CIT. In Experiment 1, only actor-present lineups were used; in Experiments 2 and 3, the thief and bystander 1 lineups were either both present or both absent, as were the victim and bystander 2 lineups (i.e., two lineups were always absent, and two were present); and in Experiments 4 and 5, actor presence was completely counterbalanced. In Experiment 1, the sequence of the lineups was fixed (thief, victim, bystander 1, bystander 2); in Experiments 2 and 3, we used a Latin square (thief, victim, bystander 1, bystander 2 vs. victim, bystander 1, bystander 2, thief vs. bystander 1, bystander 2, thief, victim, etc.); in Experiment 4, lineup order (thief-victim vs. victim-thief) was counterbalanced; and in Experiment 5, lineup order was random. Testing sessions lasted approximately 30-40 min. The debriefing followed after termination of data collection. A summary of the procedural specifics of the CITs of each experiment can be found in Table 1.

Results

CIT data preparation and overview of analyses

Prior to data analyses, trials with wrongful responses and reaction times faster than 150 ms (i.e., inattentive responding) or slower than 1500 ms were removed from the data set. (Following previous work in the field, we had initially also removed all response times slower than 800 ms to account for possible inattentive responding or strategic slowing (Kleinberg & Verschuere, 2015; Noordraven & Verschuere, 2013; Verschuere & Kleinberg, 2015). Following the advice of reviewer Laura Visu-Petra, we reanalyzed the data with a longer deadline of 1500 ms (cf. Seymour et al., 2000). Unexpectedly, this led to better results, possibly because processing time for faces can often be longer than 800 ms (Ramon et al., 2011) and has been shown to be longer than for words (e.g., Ovaysikia, Tahir, Chan, & DeSouza, 2011), which served as stimuli in previous CIT studies. We therefore report the findings of the latter analyses. To enable comparison with previous studies using the 800-ms deadline, those findings are reported in Table 4 in the Appendix.) Next, data were aggregated to yield the average reaction times per stimulus type per participant and probe (e.g., for Experiment 1, 2 × 4 variables would be computed: the mean reaction times to the probes and irrelevants referring to the thief, victim, bystander 1, and bystander 2). For each experiment, a paired-samples t test contrasting probes vs. irrelevants was computed per role. Finally, a weighted mean estimate of the effect size across all five studies was established.
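A minimal sketch of this analysis pipeline follows (illustrative only, with hypothetical data; the t-to-d conversion d = t/√n for paired designs is our assumption, not stated by the authors, though it reproduces the values reported in Table 2).

```python
# Sketch (not the authors' code): trim RTs outside 150-1500 ms, aggregate to
# per-participant means, run a paired-samples t test on probe vs. irrelevant
# means, and convert t to a within-subjects Cohen's d via d = t / sqrt(n).
from math import sqrt
from statistics import mean, stdev

def clean(trials):
    """Keep correct responses with 150 <= RT (ms) <= 1500.
    trials: list of (stimulus_type, rt_ms, correct) tuples."""
    return [(s, rt) for s, rt, ok in trials if ok and 150 <= rt <= 1500]

def paired_t(probe_means, irrelevant_means):
    """Paired-samples t statistic over per-participant mean RTs."""
    diffs = [p - i for p, i in zip(probe_means, irrelevant_means)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

def cohen_d(t, n):
    """Within-subjects effect size from a paired t statistic (assumed formula)."""
    return t / sqrt(n)

# Hypothetical per-participant mean RTs (ms) for one role:
probes      = [470, 455, 490, 462, 480, 475, 468]
irrelevants = [455, 450, 478, 460, 472, 463, 465]
t = paired_t(probes, irrelevants)
print(round(t, 2), round(cohen_d(t, len(probes)), 2))

# Cross-check with a reported value (Experiment 1, victim: t = 3.72, n = 55):
print(round(cohen_d(3.72, 55), 2))  # → 0.5, matching Table 2's d = 0.50
```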
Psychological Research (2019) 83:1210–1222

Table 2  Reaction times, standard deviations, and inferential statistics for the pairwise comparisons of the reaction times for probes and irrelevant stimuli (including reaction times 150–1500 ms)

Study            Role         Actor  df   t       d       p       Probes M (SD)  Irrelevants M (SD)
1                Thief        A      54   −0.51   −0.07   .610    421 (76)       424 (69)
1                Victim       B      54   3.72    0.50    <.001   466 (84)       444 (64)
1                Bystander 1  C      54   0.68    0.05    .499    426 (84)       423 (68)
1                Bystander 2  D      54   6.17    0.83    <.001   454 (74)       419 (64)
2                Thief        EFGH   106  −0.11   −0.01   .913    374 (70)       374 (58)
2                Victim       EFGH   106  −2.16   −0.21   .033    369 (64)       376 (58)
2                Bystander 1  EFGH   106  −1.41   −0.14   .161    367 (61)       371 (53)
2                Bystander 2  EFGH   106  −0.34   −0.03   .734    370 (63)       371 (56)
3                Thief        EFGH   83   1.70    0.19    .093    494 (76)       485 (55)
3                Victim       EFGH   83   2.16    0.24    .034    497 (76)       484 (53)
3                Bystander 1  EFGH   83   0.96    0.10    .341    486 (72)       481 (57)
3                Bystander 2  EFGH   83   2.09    0.23    .040    497 (81)       484 (54)
4                Thief        IJ     74   2.48    0.29    .015    479 (64)       466 (51)
4                Victim       IJ     74   2.12    0.25    .037    479 (61)       469 (55)
5                Woman 1      KL     79   −0.99   −0.11   .324    545 (81)       551 (68)
5                Woman 2      MN     79   2.23    0.25    .029    535 (77)       521 (62)
All 5 studies    Across roles: weighted mean d = 0.14 (95% CI 0.08, 0.19)
Experiments 2–5  Across roles: weighted mean d = 0.10 (95% CI 0.05, 0.16)

Comparison of reaction times to probes and irrelevants

Across the five experiments, we conducted 16 tests to compare probe vs. irrelevant reaction times. Eight of the tests showed no effect (|d| ≤ 0.20), five tests displayed small effects in the expected direction, and one a moderate and one a large effect in the expected direction. One test showed a small effect in the opposite direction. Table 2 provides the mean reaction times (and SDs) in response to facial probes and irrelevants and the inferential statistics.

In Experiment 5, two control objects were included in the CIT. Replicating earlier findings, the reaction times for probes were slower (object 1: M = 454 ms, SD = 58; object 2: M = 451 ms, SD = 48; collapsed: M = 453 ms, SD = 47) than for irrelevant stimuli (object 1: M = 442 ms, SD = 40; object 2: M = 440 ms, SD = 38; collapsed: M = 441 ms, SD = 38), t(79) = 2.63, p = .010, d = 0.29 (object 1); t(79) = 3.50, p = .001, d = 0.39 (object 2); t(79) = 4.26, p < .001, d = 0.48 (collapsed).

Meta-analysis across five studies

The five studies together yielded 16 effect size estimates. Using the reciprocal of the sampling variances as weights (cf. Gibbons et al., 1993), a weighted mean estimate of the effect size yielded an average effect size of 0.14 (95% confidence interval: 0.08–0.19), indicating that across the five studies, a very small effect size materialized. We reran the meta-analysis excluding Experiment 1. This was to account for the fact that in this experiment, the CIT outcome may have been impacted by the preceding lineup task, a procedural detail that may be sufficient to create a deviant response in the subsequent CIT. The average effect size across Experiments 2–5 was 0.10 (95% confidence interval: 0.05–0.16), a very small effect.

In addition, we reran the meta-analyses including only those participants who correctly identified the actor from an actor-present lineup. The results showed a small average effect size when looking at all five experiments [mean d = 0.38 (95% confidence interval: 0.24–0.52)], whereas there was a very small effect, on average, when Experiment 1 was excluded [mean d = 0.15 (95% confidence interval: −0.06 to 0.36)].

Eyewitness identification performance from traditional lineups

Table 3 shows the identification accuracy rates split by experiments and probes. The data concerning Experiments 2–5 must be treated with caution, because in these experiments, the lineup task was preceded by the CIT task. This familiarizes participants with the stimuli presented in the subsequent lineup and possibly introduces unconscious transference effects (Deffenbacher et al., 2006). As a consequence, the identification task may have been quite difficult. This decision was made to avoid contamination of the CIT outcomes, which was the focus of this line of research. The results support this notion. In Experiments 2, 3, and 5, identification accuracy was somewhat lower (around 40%) compared to Experiment 1, where the identification measure was not challenged by a preceding CIT (63% accuracy on average). Furthermore, the proportion of do not know answers, which can be taken as an indication of the difficulty of the task, was higher in Experiments 2, 3, and 5 (33–36%) compared to Experiment 1 (13%). Experiment 4 constitutes an outlier in the sense that identification accuracy rates were equally high (or, if anything, even higher: 66%) and do not know responses equally low (13%) as in Experiment 1. This might be the result of our attempts to create a less complex stimulus film with only two actors and optimal viewing conditions, making the identification less difficult compared to all other experiments.

Table 3  Identification accuracy rates for different roles across five experiments

Role                        Identification accuracy (%)   Do not know responses (%)
Experiment 1
  Thief                     60.4                          12.7
  Victim                    92.2                          7.3
  Bystander 1               73.5                          10.9
  Bystander 2               18.2                          20.0
  Across roles              62.5                          12.7
Experiment 2
  Thief                     44.0                          29.9
  Victim                    34.7                          32.7
  Bystander 1               50.0                          34.6
  Bystander 2               34.7                          32.7
  Across roles              40.8                          32.5
Experiment 3
  Thief                     30.7                          26.2
  Victim                    37.8                          46.4
  Bystander 1               35.7                          33.3
  Bystander 2               39.6                          36.9
  Across roles              35.6                          35.7
Experiment 4
  Thief                     69.2                          13.3
  Victim                    63.1                          13.3
  Across roles              66.2                          13.3
Experiment 5
  Woman 1                   26.7                          25.0
  Woman 2                   19.6                          30.0
  Object A                  67.9                          30.0
  Object B                  59.0                          51.3
  Across roles and stimuli  41.7                          34.1

Note. The data concerning Experiments 2–5 must be treated with caution, because the CIT task preceded the lineup task. This familiarizes participants with the stimuli presented in the subsequent lineup and possibly introduces unconscious transference effects (Deffenbacher et al., 2006). As a consequence, the identification task may have been quite difficult. Calculations of identification accuracy include positive and negative identification decisions made, but exclude do not know responses (i.e., the number of accurate responses divided by the number of accurate plus inaccurate responses).

Discussion

It was the aim of the current line of research to test an alternative to traditional, explicit lineup identification for testing cooperative eyewitnesses' memory for faces, using an indirect measure of face recognition. To this end, we transferred the reaction time-based CIT methodology that is well established in the field of memory detection in suspects to the field of eyewitness identification. The idea that reaction times in a CIT task should be greater for faces that were previously encountered in a stimulus event as compared to irrelevant foils was tested in a series of five experiments. The methodology of the studies progressed sequentially and addressed possible explanations for non-significant and inconsistent findings. Across 16 reaction time comparisons, seven were in favor of reaction time-based CIT predictions, whereas half of the tests returned no significant effects and one effect was opposite to our expectations. A meta-analysis showed that the overall effect size was very small. Our findings do not support the use of the reaction time-based CIT for testing cooperative eyewitnesses' facial recognition memory. These findings contrast with the finding that the ERP-based CIT may be useful for eyewitness identification (Lefebvre et al., 2007, 2009). At least three explanations need to be considered for this apparent discrepancy.

First, it is possible that the stimulus event did not allow for sufficiently deep encoding of the faces. We think that this explanation is unlikely, because the considerable identification accuracy rates in Experiment 1—where the lineups were presented prior to the CIT—are in line with accuracy rates reported in the literature (e.g., Clark et al., 2008; Fitzgerald & Price, 2015; Steblay et al., 2011) and with those reported in the previous experiments using the same stimulus film (Sagana et al., 2014, 2015, Experiments 2a–c, 3; Sauerland et al., 2012). In addition, results from a previous study deem it unlikely that the stimulus persons used in our experiments were particularly difficult to encode. Specifically, the films used in Experiments 2 and 3 served as stimulus materials in a study looking at eyewitnesses' memory reports (Sauerland et al., 2014). Collapsed across different recall conditions, participants reported, on average, about 53 person details (i.e., details referring to the appearance of the individuals shown in the film, including facial details, description of clothing, build, etc.), of which, on average, 73% were accurate. Together, these findings do not seem to support the notion that it was particularly difficult to encode the actors shown in our stimulus films. Finally, while rerunning our meta-analyses including only participants who correctly identified the actor from an actor-present lineup increased the average effect size, this increase was carried by Experiment 1. It appears that viewing the lineup prior to the CIT—which was only the case in Experiment 1—improved CIT performance. Accordingly, it seems most appropriate to consider the average effect sizes excluding Experiment 1 as the true effect of the reaction time-based CIT. These effect sizes were very small (including all participants from Experiments 2–5: d = 0.05; including only participants who accurately identified the actor from the lineup in Experiments 2–5: d = 0.15), regardless of accurate actor identification. This confirms our conclusion that a reaction time-based CIT does not work for lineups, even if explicit recognition occurred.

Second, a more likely explanation for our findings concerns the careful matching of the employed faces, as required by eyewitness identification procedural guidelines (e.g., Wells et al., 1998). Lineup pictures were deliberately selected to match the general description of the probes, leading to matched hair color and length, body type, and age. In fact, during debriefing, many participants spontaneously commented on the resemblance of the different stimulus faces. While the selection of individuals who match in their general description is a necessity in lineup construction, it might be obstructive for the CIT. Indeed, it was found that the more the irrelevants resemble the probe, the smaller the CIT effect (Ben-Shakhar & Gati, 1987). This may explain why Seymour and Kerlin (2008; see also Meijer et al., 2007) did find the reaction time-based CIT to be responsive to face recognition. They selected their facial stimuli from the Aberdeen Psychological Image Collection, which contains pictures of 116 people that have not been selected to match any criteria. While Lefebvre et al.'s (2007, 2009) facial stimuli were matched for some attributes, such as gender, age, race, and hair length, no information was given about other features such as hair color or hair style, and no measures of effective lineup size were provided. Thus, it is possible that the conditions for creating a fair lineup and creating an effective CIT are mutually exclusive. This notion was also confirmed by our findings referring to objects in Experiment 5. Here, the expected CIT effect was found. The fact that the crime-related objects (e.g., hotel sign) were quite distinct from the irrelevant foils (e.g., Advent wreath, Dutch flag, art show sign, carnival garland, occupation banner reading "This is ours") may have contributed to the CIT effect for the objects. One way to test this idea would be by conducting a study with closely matched objects or with non-matched faces.

Third, our findings are in line with the emerging idea that different psychological processes may underlie the reaction time-based CIT and the ERP-based CIT (klein Selle et al., 2017). Lefebvre et al. (2007, 2009) provided evidence that the ERP-based CIT is sensitive to face recognition, independent of active concealment attempts. Our series of studies points to the possibility that the reaction time-based CIT critically depends on active concealment, explaining the null effects we observed in cooperative witnesses. This reasoning is supported by Suchotzki et al. (2015), who suggested that the reaction time increase to probes reflects response inhibition (see also Seymour & Schumacher, 2009; Verschuere & De Houwer, 2011). Suchotzki et al. (2015) observed a reaction time-based CIT effect only when mock crime participants attempted to hide crime knowledge, but not when they admitted crime knowledge. Thus, it is possible that stronger forms of active deception than achieved here may be crucial for obtaining the reaction time-based CIT effect. This leads to the intriguing possibility that (1) CIT measures that do not depend on active deception—electrodermal responding and the P300 ERP—may be effective in both cooperative and non-cooperative eyewitnesses and (2) the reaction time-based CIT may be effective in non-cooperative (i.e., deceptive) eyewitnesses (cf. Lefebvre et al., 2009). We thank an anonymous reviewer for bringing this point to our attention.

To summarize, the results of the presented five experiments indicate that the reaction time-based CIT is not a valid means of testing facial recognition in cooperative eyewitnesses with matched faces. The findings indicate that it is important to map how stimulus distinctiveness affects the validity of the reaction time-based CIT.

Acknowledgements We thank Nick J. Broers for running the meta-analyses; Chaim Kuhnreich, Lena Leven, and Lisa Lewandowski for their help in collecting the data; Katerina Georgiadou for preparing the data for analyses; Jacco Ronner for creating the Presentation tasks; and Richard Benning for creating the virtual reality event.

Compliance with ethical standards

Conflict of interest All authors declare that they have no conflict of interest.

Ethical standards All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent Informed consent was obtained from all individual participants included in the study.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix A

See Table 4.
Table 4  Reaction times, standard deviations, and inferential statistics for the pairwise comparisons of the reaction times for probes and irrelevant stimuli (including reaction times 150–800 ms)

Study            Role         Actor  df   t       d       p       Probes M (SD)  Irrelevants M (SD)
1                Thief        A      51   −2.78   −0.38   .008    397 (46)       408 (52)
1                Victim       B      51   3.93    0.54    <.001   445 (64)       427 (55)
1                Bystander 1  C      51   −0.90   −0.12   .371    408 (66)       410 (57)
1                Bystander 2  D      51   7.73    1.07    <.001   443 (65)       409 (54)
2                Thief        EFGH   106  −0.36   −0.03   .721    368 (63)       369 (53)
2                Victim       EFGH   106  −2.30   −0.22   .023    364 (57)       371 (52)
2                Bystander 1  EFGH   106  −1.22   −0.12   .227    364 (58)       367 (50)
2                Bystander 2  EFGH   106  −0.55   −0.05   .582    365 (57)       366 (51)
3                Thief        EFGH   74   0.16    0.02    .870    473 (47)       473 (45)
3                Victim       EFGH   74   1.81    0.21    .075    478 (50)       471 (44)
3                Bystander 1  EFGH   74   0.19    0.02    .850    468 (54)       467 (45)
3                Bystander 2  EFGH   74   1.14    0.13    .259    478 (60)       472 (47)
4                Thief        IJ     74   2.08    0.24    .041    467 (55)       459 (47)
4                Victim       IJ     74   2.21    0.26    .030    468 (56)       460 (50)
5                Woman 1      KL     76   −0.33   −0.04   .746    519 (57)       520 (50)
5                Woman 2      MN     76   1.71    0.19    .091    507 (51)       501 (49)
All 5 studies    Across roles: weighted mean d = 0.07 (95% CI 0.02, 0.13)
Experiments 2–5  Across roles: weighted mean d = 0.05 (95% CI −0.00, 0.11)

References

Ben-Shakhar, G., & Gati, I. (1987). Common and distinctive features of verbal and pictorial stimuli as determinants of psychophysiological responsivity. Journal of Experimental Psychology: General, 116, 91–105. https://doi.org/10.1037/0096-3445.116.2.91

Brewer, N., & Palmer, M. A. (2010). Eyewitness identification tests. Legal and Criminological Psychology, 15, 77–96. https://doi.org/10.1348/135532509x414765

Brewer, N., & Wells, G. L. (2011). Eyewitness identification. Current Directions in Psychological Science, 20, 24–27. https://doi.org/10.1177/0963721410389169

Clark, S. E., Howell, R. T., & Davey, S. L. (2008). Regularities in eyewitness identification. Law and Human Behavior, 32, 187–218. https://doi.org/10.1007/s10979-006-9082-4

Deffenbacher, K. A., Bornstein, B. H., & Penrod, S. D. (2006). Mugshot exposure effects: Retroactive interference, mugshot commitment, source confusion, and unconscious transference. Law and Human Behavior, 30, 287–307. https://doi.org/10.1007/s10979-006-9008-1

Doob, A. N., & Kirshenbaum, H. M. (1973). Bias in police lineups: Partial remembering. Journal of Police Science and Administration, 1, 287–293.

Dupuis, P. R., & Lindsay, R. C. L. (2007). Radical alternatives to traditional lineups. In R. C. L. Lindsay, D. F. Ross, J. D. Read, & M. P. Toglia (Eds.), The handbook of eyewitness psychology, Vol. II: Memory for people (pp. 179–200). Mahwah: Lawrence Erlbaum Associates Publishers.

Farwell, L. A., & Donchin, E. (1991). The truth will out: Interrogative polygraphy ("lie detection") with event-related brain potentials. Psychophysiology, 28, 531–547. https://doi.org/10.1111/j.1469-8986.1991.tb01990.x

Fitzgerald, R. J., & Price, H. L. (2015). Eyewitness identification across the life span: A meta-analysis of age differences. Psychological Bulletin, 141, 1228–1265. https://doi.org/10.1037/bul0000013

Gibbons, R. D., Hedeker, D. R., & Davis, J. M. (1993). Estimation of effect size from a series of experiments involving paired comparisons. Journal of Educational Statistics, 18, 271–279. https://doi.org/10.3102/10769986018003271

Gorini, A., Gaggioli, A., & Riva, G. (2007). Virtual worlds, real healing. Science, 318, 1549b. https://doi.org/10.1126/science.318.5856.1549b

Kassin, S. M., Bogart, D., & Kerner, J. (2012). Confessions that corrupt: Evidence from the DNA exoneration case files. Psychological Science, 23, 41–45. https://doi.org/10.1177/0956797611422918

Kim, K., Park, K. K., & Lee, J.-H. (2014). The influence of arousal and expectation on eyewitness memory in a virtual environment. Cyberpsychology, Behavior, and Social Networking, 17, 709–713. https://doi.org/10.1089/cyber.2013.0638

klein Selle, N., Verschuere, B., Kindt, M., Meijer, E., & Ben-Shakhar, G. (2017). Unraveling the roles of orienting and inhibition in the concealed information test. Psychophysiology. https://doi.org/10.1111/psyp.12825

Kleinberg, B., & Verschuere, B. (2015). Memory detection 2.0: The first web-based memory detection test. PLoS One, 10, e0118715. https://doi.org/10.1371/journal.pone.0118715

Kleinberg, B., & Verschuere, B. (2016). The role of motivation to avoid detection in reaction time-based concealed information detection. Journal of Applied Research in Memory and Cognition, 5, 43–51. https://doi.org/10.1016/j.jarmac.2015.11.004

Leach, A.-M., Cutler, B. L., & van Wallendael, L. (2009). Lineups and eyewitness identification. Annual Review of Law and Social Science, 5, 157–178. https://doi.org/10.1146/annurev.lawsocsci.093008.131529

Lefebvre, C. D., Marchand, Y., Smith, S. M., & Connolly, J. F. (2007). Determining eyewitness identification accuracy using event-related brain potentials (ERPs). Psychophysiology, 44, 894–904. https://doi.org/10.1111/j.1469-8986.2007.00566.x

Lefebvre, C. D., Marchand, Y., Smith, S. M., & Connolly, J. F. (2009). Use of event-related brain potentials (ERPs) to assess eyewitness accuracy and deception. International Journal of Psychophysiology, 73, 218–225. https://doi.org/10.1016/j.ijpsycho.2009.03.003

Lieblich, I., Ben-Shakhar, G., & Kugelmass, S. (1976). Validity of the guilty knowledge technique in a prisoner's sample. Journal of Applied Psychology, 61, 89–93. https://doi.org/10.1037/0021-9010.61.1.89

Lukasz, G., Kleinberg, B., & Verschuere, B. (2017). Familiarity-related filler trials increase the validity of the reaction times-based concealed information test. Journal of Applied Research in Memory and Cognition.

Lykken, D. T. (1959). The GSR in the detection of guilt. Journal of Applied Psychology, 43, 385–388. https://doi.org/10.1037/h0046060

Meijer, E. H., Smulders, F. T. Y., Merckelbach, H. L. G. J., & Wolf, A. G. (2007). The P300 is sensitive to concealed face recognition. International Journal of Psychophysiology, 66, 231–237. https://doi.org/10.1016/j.ijpsycho.2007.08.001

Meijer, E. H., Verschuere, B., Gamer, M., Merckelbach, H., & Ben-Shakhar, G. (2016). Deception detection with behavioral, autonomic, and neural measures: Conceptual and methodological considerations that warrant modesty. Psychophysiology, 53, 593–604. https://doi.org/10.1111/psyp.12609

Moors, A., & De Houwer, J. (2006). Automaticity: A theoretical and conceptual analysis. Psychological Bulletin, 132, 297. https://doi.org/10.1037/0033-2909.132.2.297

Newcombe, N., & Fox, N. A. (1994). Infantile amnesia: Through a glass darkly. Child Development, 65, 31–40. https://doi.org/10.1111/j.1467-8624.1994.tb00732.x

Noordraven, E., & Verschuere, B. (2013). Predicting the sensitivity of the reaction time-based concealed information test. Applied Cognitive Psychology, 27, 328–335. https://doi.org/10.1002/acp.2910

Ovaysikia, S., Tahir, K. A., Chan, J. L., & DeSouza, J. F. X. (2010). Word wins over face: Emotional Stroop effect activates the frontal cortical network. Frontiers in Human Neuroscience, 4, 234. https://doi.org/10.3389/fnhum.2010.00234

Parliament, L., & Yarmey, A. D. (2002). Deception in eyewitness identification. Criminal Justice and Behavior, 29, 734–746. https://doi.org/10.1177/009385402237925

Ramon, M., Caharel, S., & Rossion, B. (2011). The speed of recognition of personally familiar faces. Perception, 40, 437–449. https://doi.org/10.1068/p6794

Riva, G. (2005). Virtual reality in psychotherapy: Review. CyberPsychology and Behavior, 8, 220–230. https://doi.org/10.1089/cpb.2005.8.220

Sagana, A., Sauerland, M., & Merckelbach, H. (2014). 'This is the person you selected': Eyewitnesses' blindness for their own facial recognition decisions. Applied Cognitive Psychology, 28, 753–764. https://doi.org/10.1002/acp.3062

Sagana, A., Sauerland, M., & Merckelbach, H. (2015). Eyewitnesses' blindness for own- and other-race identification decisions. In A. Sagana, A blind man's bluff: Choice blindness in eyewitness testimony (Doctoral dissertation) (pp. 105–120). Maastricht: Maastricht University. http://pub.maastrichtuniversity.nl/0ac07c43-c029-4d52-8f91-c8b3b6932d39. Accessed 30 Mar 2017.

Sauerland, M., Krix, A. C., van Kan, N., Glunz, S., & Sak, A. (2014). Speaking is silver, writing is golden? The role of cognitive and social factors in written versus spoken witness accounts. Memory and Cognition, 42, 978–992. https://doi.org/10.3758/s13421-014-0401-6

Sauerland, M., Sagana, A., & Sporer, S. L. (2012). Assessing nonchoosers' eyewitness identification accuracy from photographic showups by using confidence and response times. Law and Human Behavior, 36, 394–403. https://doi.org/10.1037/h0093926

Schultheis, M. T., & Rizzo, A. A. (2001). The application of virtual reality technology in rehabilitation. Rehabilitation Psychology, 46, 296–311. https://doi.org/10.1037/0090-5550.46.3.296

Seymour, T. L., & Kerlin, J. R. (2008). Successful detection of verbal and visual concealed knowledge using an RT-based paradigm. Applied Cognitive Psychology, 22, 475–490. https://doi.org/10.1002/acp.1375

Seymour, T. L., & Schumacher, E. H. (2009). Electromyographic evidence for response conflict in the exclude recognition task. Cognitive, Affective, and Behavioral Neuroscience, 9, 71–82. https://doi.org/10.3758/CABN.9.1.71

Seymour, T. L., Seifert, C. M., Shafto, M. G., & Mosmann, A. L. (2000). Using response time measures to assess "guilty knowledge". Journal of Applied Psychology, 85, 30–37. https://doi.org/10.1037/0021-9010.85.1.30

Steblay, N. K., Dysart, J. E., & Wells, G. L. (2011). Seventy-two tests of the sequential lineup superiority effect: A meta-analysis and policy discussion. Psychology, Public Policy, and Law, 17, 99–139. https://doi.org/10.1037/a0021650

Stormark, K. M. (2004). Skin conductance and heart-rate responses as indices of covert face recognition in preschool children. Infant and Child Development, 13, 423–433. https://doi.org/10.1002/icd.368

Suchotzki, K., Crombez, G., Smulders, F. T., Meijer, E., & Verschuere, B. (2015). The cognitive mechanisms underlying deception: An event-related potential study. International Journal of Psychophysiology, 95, 395–405. https://doi.org/10.1016/j.ijpsycho.2015.01.010

Suchotzki, K., Verschuere, B., Peth, J., Crombez, G., & Gamer, M. (2014). Manipulating item proportion and deception reveals crucial dissociation between behavioral, autonomic, and neural indices of concealed information. Human Brain Mapping, 36, 427–439. https://doi.org/10.1002/hbm.22637

Suchotzki, K., Verschuere, B., Van Bockstaele, B., Ben-Shakhar, G., & Crombez, G. (2017). Lying takes time: A meta-analysis on reaction time measures of deception. Psychological Bulletin. https://doi.org/10.1037/bul0000087

Technical Working Group for Eyewitness Evidence. (1999). Eyewitness evidence: A guide for law enforcement. Washington, DC: US Department of Justice, Office of Justice Programs. http://ncjrs.gov/pdffiles1/nij/178240.pdf. Accessed 30 Mar 2017.

Tredoux, C. G. (1998). Statistical inference on measures of lineup fairness. Law and Human Behavior, 22, 217–237. https://doi.org/10.1023/A:1025746220886

Tredoux, C. G. (1999). Statistical considerations when determining measures of lineup size and lineup bias. Applied Cognitive Psychology, 13, S9–S26. https://doi.org/10.1002/(SICI)1099-0720(199911)13:1+%3CS9:AID-ACP634%3E3.0.CO;2-1

Verschuere, B., Ben-Shakhar, G., & Meijer, E. (2011). Memory detection: Theory and application of the Concealed Information Test. Cambridge: Cambridge University Press.

Verschuere, B., Crombez, G., De Clercq, A., & Koster, E. H. W. (2004). Autonomic and behavioral responding to concealed information: Differentiating orienting and defensive responses. Psychophysiology, 41, 461–466. https://doi.org/10.1111/j.1469-8986.00167.x

Verschuere, B., & De Houwer, J. (2011). Detecting concealed information in less than a second: Response latency-based measures. In B. Verschuere, G. Ben-Shakhar, & E. Meijer (Eds.), Memory detection: Theory and application of the Concealed Information Test (pp. 46–62). Cambridge: Cambridge University Press.

Verschuere, B., & Kleinberg, B. (2015). ID-check: Online concealed information test reveals true identity. Journal of Forensic Sciences, 61, S237–S240. https://doi.org/10.1111/1556-4029.12960

Verschuere, B., & Kleinberg, B. (2016). Assessing autobiographical memory: The web-based autobiographical implicit association test. Memory, 1–11. https://doi.org/10.1080/09658211.2016.1189941

Verschuere, B., Kleinberg, B., & Theocharidou, K. (2015). Reaction time-based memory detection: Item saliency effects in the single-probe and the multiple-probe protocol. Journal of Applied Research in Memory and Cognition, 4, 59–65. https://doi.org/10.1016/j.jarmac.2015.01.001

Visu-Petra, G., Miclea, M., & Visu-Petra, L. (2012). Reaction time-based detection of concealed information in relation to individual differences in executive functioning. Applied Cognitive Psychology, 26, 342–351. https://doi.org/10.1002/acp.1827

Wells, G. L., Memon, A., & Penrod, S. D. (2006). Eyewitness evidence: Improving its probative value. Psychological Science in the Public Interest, 7, 45–75. https://doi.org/10.1111/j.1529-1006.2006.00027.x

Wells, G. L., Small, M., Penrod, S., Malpass, R. S., Fulero, S. M., & Brimacombe, C. E. (1998). Eyewitness identification procedures: Recommendations for lineups and photospreads. Law and Human Behavior, 22, 603–647. https://doi.org/10.1023/A:1025750605807

Publisher: Springer Journals
Copyright: © 2017 The Author(s)
Subject: Psychology; Psychology Research
ISSN: 0340-0727; eISSN: 1430-2772
DOI: 10.1007/s00426-017-0948-5
More specifically, it has been argued worst case, a wrongful identification decision can lead to that existing research may not be radical enough, with new a wrongful conviction while allowing the guilty party to procedures merely constituting adaptations of existing ones remain free and reoffend. Wrongful identifications were (Dupuis & Lindsay, 2007), rather than generating funda- involved in 70% of the wrongful convictions uncovered mentally new approaches for testing eyewitnesses’ memory by the innocence project (innocenceproject.org; cf. Kassin for faces. Existing procedures rely on explicit identification, et al., 2012; Wells et al., 1998). While identification accu - often after some deliberation. One possible source of error racy can vary widely across conditions, different meta-anal- is the constructive identification through reasoning (i.e., the yses show that, on average, accuracy for six-person lineups culprit is likely to be included and number 4 looks most like (i.e., seven answer options: all six lineup members and the him, so it must be number 4). More gross errors in explicit option to reject the lineup) revolves around 50% (e.g., Clark identification may come from uncooperative eyewitnesses that deliberately point to the wrong person (e.g., to protect someone else; being bribed; after being threatened; see * Melanie Sauerland Leach et al., 2009; Parliament & Yarmey, 2002). In other melanie.sauerland@maastrichtuniversity.nl words, explicit identification is prone to subtle biases in Section Forensic Psychology, Department of Clinical human decision making and strategic misidentification. Psychological Science, Faculty of Psychology One alternative might be to rely on indirect measures. Such and Neuroscience, Maastricht University, P.O. 
Box 616, responses are attractive in the sense that they may be unin- 6200 MD Maastricht, The Netherlands 2 tentional, uncontrollable, goal independent, autonomous, Department of Clinical Psychology, University purely stimulus driven, unconscious, efficient, or fast (Moors of Amsterdam, Amsterdam, The Netherlands Vol:.(1234567890) 1 3 Psychological Research (2019) 83:1210–1222 1211 & De Houwer, 2006). First evidence supporting the idea that press the YES button whenever encountering the target (a indirect measures can provide information about face recog- red Maserati) and the NO button for all other cars. For the nition comes from two studies with pre-school and school innocent examinee, all NO-reaction times will be roughly children (Newcombe & Fox, 1994; Stormark 2004). In these similar. For the guilty examinee, the blue Porsche will stand studies, participants first viewed a slide show of previously out and grab attention. Longer reaction times for the blue familiar faces (playmates or previous classmates) and unfa- Porsche as compared to the other NO-reaction times pro- miliar faces, while their skin conductance, heart rate, or vides an index of recognition. After the initial validation both were recorded. Subsequently, direct face recognition of the reaction time-based CIT (Farwell & Donchin, 1991, responses were collected. Although both direct and indirect Seymour & Kerlin, 2008; Seymour & Schumacher, 2009; measures scored above chance in both studies, the indirect Seymour et al., 2000), several recent well-powered studies measures outperformed direct recognition decisions. 
have confirmed its diagnostic efficiency (Kleinberg & Verschuere, 2015, 2016; Verschuere et al., 2015, 2016; for a discussion of its boundary conditions and limitations, see Verschuere et al., 2011; and Meijer et al., 2016).

Meijer et al. (2007) conducted two studies to examine whether the ERP-based CIT is sensitive to concealed face recognition. In their first experiment, the CIT was capable of picking up recognition of the faces of siblings and close friends. In their second experiment, the CIT did not show students' recognition of their faculty professors' faces. Seymour and Kerlin (2008) had participants memorize a set of previously unknown faces, and the reaction time-based CIT showed high accuracy in concealed face recognition. The stimuli used in these studies, however, were not typical of eyewitness identification, because the correct faces were either very familiar or well memorized rather than incidentally encountered as in the case of eyewitnesses.

In the current line of research, we embraced the call for exploring a potential adaptation of the identification procedure in a venture that tested an indirect index of eyewitness identification: the concealed information test (CIT; Lykken, 1959). The CIT is a well-established memory detection technique (for a comprehensive review, see Verschuere et al., 2011). At first, the CIT looks much like a multiple-choice examination, presenting the examinee with the correct answer embedded amongst a series of incorrect answers. The CIT is used when the examinee may not be able or willing to explicitly identify the correct alternative and, therefore, does not rely upon an explicit answer but rather on more automatic responses to determine recognition. Suppose an exclusive blue Porsche has been stolen, and the police have a suspect who denies any involvement in or knowledge about that theft.
The suspect of the car theft could be asked about the stolen car: Was it… a white Bentley? …A green Mercedes? …A blue Porsche? …A yellow Ferrari? …A black Jaguar? Stronger (e.g., electrodermal) responding to the actual stolen car compared to the other cars is taken as an index of recognition. When combining several questions, the CIT can detect concealed recognition with high validity. Reviewing a range of indices, varying from event-related potentials (ERPs) to reaction times, Meijer et al. (2016) reported the diagnostic efficiency of the CIT (i.e., the area under the curve) to be around 0.82–0.94. This means that in such studies, a randomly chosen person with recognition has an 82–94% chance to respond more strongly in the CIT than a randomly chosen person without recognition. In recent years, there has been growing interest in the use of reaction times as the response measure in the CIT (for a review, see Suchotzki et al., 2017). Response times can be administered and analyzed cost- and time-efficiently, requiring only a single computer.

In addition, the stimuli used in these earlier studies were not matched in terms of their outer appearance. As such, they would not meet the requirements of a formal identification procedure in an investigation (cf. Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998). More specifically, Wells et al.'s (1998) rule 3 concerning the structure of lineups and photospreads states:

The suspect should not stand out in the lineup or photospread as being different from the distractors based on the eyewitness's previous description of the culprit or based on other factors that would draw extra attention to the suspect. (p. 630)

This rule is further specified with the fit-description criterion, which stresses the importance that distractors should fit the eyewitness's verbal description of the perpetrator (Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998).
Thus, when the eyewitness describes the perpetrator as 'young, white female, blond hair', the lineup should consist of young white females with blond hair.

In the reaction time-based CIT, the answer alternatives are presented briefly, one by one, on the computer screen. To assure attention to the stimuli, the examinee engages in a binary classification task, pressing a unique button for a set of stimuli learned just before the test (i.e., the targets) and another button for all other stimuli (including the correct answer or probe, as well as all foils, called irrelevants). Building on the example above, it may be explained to the examinee that the CIT will examine recognition of the stolen car.

Lefebvre et al. (2007) were the first to propose the CIT for the purpose of eyewitness identification, namely, to use incidentally encountered faces, and to match faces following guidelines for eyewitness identification. Participants watched four mock crimes across two testing sessions. In the perpetrator-present conditions, participants were presented with the photograph of the perpetrator, the victim, and five foils, one by one, on the computer screen, while electrophysiological recordings were made. Deviating from the classic CIT procedure, participants could respond to each picture by pressing one of three buttons, indicating that this picture depicted the perpetrator, the victim, or another person. In other words, participants made an explicit identification in this ERP-based CIT.

In all subsequent experiments, the CIT was presented first, to obtain CIT performance that was unimpaired by participants' lineup decision. The use of only one stimulus film in Experiment 1 raised the question whether diverging findings could be attributed to certain roles the film featured (i.e., more attention paid to the thief than a bystander) or characteristics of certain actors (e.g., higher or lower distinctiveness).
Therefore, we used different stimulus film versions for all subsequent experiments, in which actors switched roles across versions while the plot was identical.

The CIT revealed recognition of the perpetrator, and so did explicit identification. While the results point to the potential of the CIT for cooperative eyewitness identification, the electrophysiological index of recognition may have been evoked by the explicit identification. In a second ERP-based CIT study (Lefebvre et al., 2009), the effects were replicated, but also extended by examining the role of active concealment. In the deceptive condition, participants concealed the identity of the perpetrator from the experimenters by pressing the button that corresponded with an innocent individual, rather than the perpetrator. Results confirmed the earlier finding, showing that even when trying to conceal their knowledge, the CIT revealed recognition of the perpetrator's face.

Taken together, there is preliminary evidence that the ERP-based CIT may be useful for testing the facial memory of cooperative eyewitnesses.

Following null findings and contradictory results in Experiments 1 and 2, and emerging insights into the validity of the reaction time-based CIT, we realized that we may have used a suboptimal CIT protocol. Indeed, Verschuere et al. (2015) showed that using a separate CIT per probe (i.e., one for the victim, one for the thief, etc.) reduced accuracy and that it is recommended to use one CIT that presents all items completely intermixed (see also Lukasz et al., 2017). In Experiments 3–5, we, therefore, administered such a multiple-probe CIT, in which all probes (that is, all actors that appeared in the stimulus event) and all corresponding irrelevant items were presented in random order. Following small effect sizes in Experiment 3, we considered the possibility
that our stimulus films had not allowed for sufficient encoding of the actors' faces. We, therefore, prepared a less complex stimulus film with only two actors and optimal viewing conditions (long facial viewing time, including close-ups, for both actors) for Experiment 4. Indeed, small but significant effects materialized for the two actors (thief and victim) in this experiment. The final experiment (Experiment 5) additionally addressed three issues: for one, Experiment 5 included an additional practice block and a minimum proportion of accurate reactions during practice before a participant could move on to the actual CIT. Second, a virtual reality event was used instead of a real-life film, to be able to better control the actions and exposure of the subjects featured in the mock crime and to offer participants a more realistic experience of the mock crime (cf. Gorini et al., 2007; Kim et al., 2014; Riva, 2005; Schultheis & Rizzo, 2001). Finally, we included two control objects in the stimulus event.

In the present research line, we examined whether the findings extend to the reaction time-based CIT, which is much easier to apply. This was tested in a series of five experiments. We expected that the recognition of a face previously encountered in a stimulus event (probes) would be reflected in longer reaction times, compared to reaction times for irrelevants.

Overview of the studies

Participants witnessed a crime involving one or more individuals. The subsequent reaction time-based CIT assessed face recognition of the individuals involved in the crime. Using the classic CIT procedure, participants pressed one specific key for all stimuli (i.e., irrelevants and probes), except for the target stimulus that was memorized prior to the CIT. The progression of the five conducted experiments can be described as follows: in Experiment 1, one
stimulus film depicting four actors who played a thief, a victim, and two bystanders was used. The lineup referring to each actor was presented prior to the referring CIT to receive a lineup performance measure that was unimpaired by CIT presentation.

Finding an effect for the objects but not the faces would replicate earlier findings concerning objects (e.g., Suchotzki et al., 2014; Verschuere et al., 2004; Visu-Petra et al., 2012), showing the validity of the CIT for objects and strengthening the conclusion of the absence of an effect for lineup faces. Anticipating the results, we found a CIT effect for objects, but not lineup faces. Comparison of the methodology in the current studies and CIT research in memory detection in suspects opens new perspectives on when the reaction time-based CIT can serve as a useful tool to diagnose face recognition in cooperative eyewitnesses.

(Note that, unlike common terminology in eyewitness identification studies, the target does not denote the person seen during the stimulus event. Rather, this individual is dubbed the probe, whereas the term target describes the person to whom the participant has to react differently in the CIT.)

Method

Data are publicly available using the following link: http://hdl.handle.net/10411/2MUUTT.

Experiments 2 and 3: For these studies, four different stimulus film versions depicting the theft of a purse in a bar were used. Across film versions, the four female actors switched the roles of the thief, accomplice, victim, and a bystander, while the plots were identical. This was to avoid possible confounding effects of actor and role.
For example, if only one film version is used, it is unclear whether an effect might be attributable to the characteristics of a particular person (i.e., distinctive features) or role (e.g., more attention paid to the thief than a bystander). All versions lasted approximately 3:20 min. A detailed description can be found in Sauerland et al. (2014).

Participants

In total, 436 participants were tested, 35 of whom were excluded. More specifically, these participants did not press the accurate key (i.e., left shift key for targets, right shift key for irrelevants and probes, or vice versa) in 50% of the trials on one response category (i.e., responses to probes, targets, or irrelevants; following Lukasz et al., 2017; Kleinberg & Verschuere, 2015, 2016; Verschuere & Kleinberg, 2016; Verschuere et al., 2015). The numbers of included participants in Experiments 1–5 were 55, 107, 84, 75, and 80, respectively (N = 401; 299 women and 102 men, M_age = 21.44 years, SD = 2.48). Participants were mostly Bachelor (88.0%) or Master students (9.7%) who studied at the Faculties of Psychology and Neuroscience (80.9%), Health, Medicine and Life Sciences (6.6%), the School of Business and Economics (4.3%), or other (8.2%). The most common native languages were German (46.5%) and Dutch (31.2%).

Experiment 4: Two film versions depicting the theft of a cell phone, involving a thief and a victim, were created (duration 1:13 min). Analogous to Experiments 2 and 3, the two female actors switched the roles of the thief and victim across film versions. The action can be described as follows: a young woman (i.e., the subsequent thief) rushes from a cafeteria to the train station when she runs into another young woman (i.e., the subsequent victim), resulting in both of their bags falling on the ground. While the thief yells angrily at the victim, both pick up their bags and their contents that had fallen; then they walk away.
When the victim searches for her phone in her bag, she cannot find it and runs after the thief. The thief is seen running towards the train holding the victim's phone in her hand.

Native language was assessed in Experiments 2–5 only, not in Experiment 1. Participants received study credit or a gift voucher in return for their participation. The research line was approved by the research board of the faculty.

Design

A within-subjects factorial design contrasting reaction times to probes vs. irrelevant faces was employed for all experiments.

Materials

Stimulus events

Four different stimulus events were used. They depicted a theft (Experiments 1–4) or the vandalism of a car (Experiment 5) and included one or two perpetrators, a victim, and sometimes one or two bystanders. The number of actors involved in the events was either four (Experiments 1–3) or two (Experiments 4 and 5).

Experiment 5: This experiment used a virtual reality event as the stimulus event. This allowed for more control over the actions and exposure of the individuals and objects. In this 1:05-min event, two young women walk through a lighted city street at night. One (woman 1) plays music on her phone, while the other one (woman 2) drops a coke bottle (object 1). Woman 1 walks up to a parked car and jumps on the motor hood to dance. Across the street, the observer sees a building with a neon casino sign (object 2). Woman 2 dances next to the car and later kicks off one of its side mirrors. When a car drives around the corner, both women run away. The faces of both women can be seen for most of the duration of the film. Each of the two roles could be played by two avatars, resulting in four identical event versions with the avatar constellations AB, Ab, aB, and ab.

Reaction Time-Based Concealed Information Test
In the beginning of the CIT, participants are instructed to press the right shift key as fast as possible in response to a facial stimulus, with one exception: the target. For this stimulus, they should press the left shift key rather than the right one. Participants are then presented with the target for 30 s, accompanied by instructions to encode this face. In a practice block, participants were provided with feedback (good, wrong, or too slow). All CIT stimuli were shown twice, and participants were given 1500 ms to react before the next stimulus was shown, following an inter-stimulus interval of 1000 ms.

Experiment 1: The first stimulus film involved four actors (thief, victim, two bystanders) and depicted the theft of a wallet in a student cafeteria (duration: 5:05 min). A detailed description can be found in Sauerland, Sagana, and Sporer (2012).

For Experiment 3, this means that the pictures presented in the CIT included four probes, four targets, and 4 × 5 = 20 irrelevants. The CIT of Experiment 4 included two probes, two targets, and 2 × 5 = 10 irrelevants. The CIT of Experiment 5 included two facial probes and two object probes, two facial targets and two object targets, 2 × 5 = 10 facial irrelevants, and 2 × 5 = 10 object irrelevants.

Table 1: Methodological specifics of the five experiments

| | Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 |
|---|---|---|---|---|---|
| N | 55 | 107 | 84 | 75 | 80 |
| Cover story | Yes | Yes | Yes | No | No |
| Stimulus event | Staged mock video | Staged mock video | Staged mock video | Staged mock video | Virtual reality event |
| Duration of event | 5:05 | 3:20 | 3:20 | 1:13 | 1:05 |
| Event versions | 1 | 4 | 4 (same as Exp 2) | 2 | 4 |
| Number of actors | 4 | 4 | 4 | 2 | 4 |
| Number of roles in stimulus event | 4 | 4 | 4 | 2 | 2 |
| Number of objects included | 0 | 0 | 0 | 0 | 2 |
| Number of practice blocks | 4 (1 per role) | 4 (1 per role) | 1 | 1 | 2 |
| CITs | 4 (1 per role) | 4 (1 per role) | 1 | 1 | 1 |
| CIT test protocol | 1 person | 1 person | Multiple persons | Multiple persons | Multiple persons |
| Number of stimulus presentations (blocks) | 20 | 20 | 21 | 21 | 21 |
| Number of trials | 560 (140 per CIT) | 560 (140 per CIT) | 588 | 294 | 588 |
| Lineup presentation | Before each CIT | After each CIT | After CIT | After CIT | After CIT |
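As a sketch of the trial structure summarized in Table 1, the code below builds a multiple-probe CIT sequence of the Experiment 3 kind (four probes, four targets, and 20 irrelevants, each presented 21 times, yielding 588 trials). The block-wise shuffling and the placeholder stimulus labels are illustrative assumptions; the paper states only that every stimulus was presented 21 (or 20) times in random order.

```python
import random

def build_cit_sequence(probes, targets, irrelevants, n_blocks=21, seed=None):
    """Build a multiple-probe CIT trial list: every stimulus appears once
    per block, with trial order shuffled within each block (an assumption;
    the paper only specifies random presentation order)."""
    rng = random.Random(seed)
    stimuli = ([("probe", s) for s in probes]
               + [("target", s) for s in targets]
               + [("irrelevant", s) for s in irrelevants])
    sequence = []
    for _ in range(n_blocks):
        block = stimuli[:]          # one presentation of each stimulus
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# Hypothetical labels for the Experiment 3 layout: 4 probes, 4 targets,
# 4 x 5 = 20 irrelevants, 21 presentations each -> 588 trials.
seq = build_cit_sequence([f"P{i}" for i in range(4)],
                         [f"T{i}" for i in range(4)],
                         [f"I{i}" for i in range(20)],
                         n_blocks=21, seed=0)
print(len(seq))  # 588
```

Shuffling within blocks rather than over the whole list keeps presentations of the same stimulus roughly evenly spaced, which is one common way to implement "each stimulus 21 times in random order".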
The size of the facial stimuli was 260 pixels × approximately 220 pixels. In Experiment 5, two practice blocks (rather than one) were conducted, with an optional third one if participants had more than 50% errors or misses in the second practice block. This served to decrease the number of wrongful responses and subsequent exclusions experienced in the former experiments. Following the practice block, the experimenter left the room and the actual task began. Every stimulus was presented 21 times (Experiments 1 and 2: 20 times) with presentations in random order, resulting in 294 to 588 trials. In Experiments 4 and 5, the question "Do you recognize this?" above every stimulus and the labels "YES" and "NO" on the left and right sides of the screen were added. This was to increase the difficulty of inhibiting a left (YES) response while the participant actually did recognize the face. In this phase, no feedback was given. The use of the left vs. right shift key was counterbalanced across participants.

CIT and lineup photos

Facial pictures: For taking the facial pictures of probes, targets, and irrelevants, individuals took jewelry, eyeglasses, and hair accessories off and wore their hair loose. The clothing of each person differed from one another, and the probes additionally wore different clothing in the film than in the photograph. The photographs were taken against a white wall and edited to display a person from the collarbone up. The selected pictures fit the general description of the actors depicted in the different stimulus events (i.e., the probes). More specifically, for each actor, six matching pictures were selected. One of these pictures was selected to serve as the target, and the remaining five pictures served as irrelevants in the CIT.
The methodological specifics of the CITs of each experiment are summarized in Table 1.

In Experiments 1 and 2, the CIT stimuli presented included one probe (i.e., the face of one of the persons seen in the stimulus film), a target (the face participants were instructed to encode at the beginning of the CIT), and five irrelevants (i.e., foils). Participants were successively presented with four different CITs, one for each probe. In Experiments 3–5, only one CIT was administered, which included multiple probes, namely, all of the individuals that they had seen in the stimulus film.

In Experiments 1 and 3, one target was pre-selected at random for all participants, whereas in the other experiments, a target was randomly selected for each participant. For the virtual reality event, seven avatars that matched in their general person description were created for each of the two roles (i.e., 14 avatars). Two avatars each were selected to appear in the different stimulus event versions (i.e., four avatars). Analogous to Experiments 1–4, the avatars from the stimulus event served as probes, one avatar each served as target, and the remaining avatars served as irrelevants.

Object pictures: Fourteen object pictures were created for Experiment 5. The pictures of a coke bottle and a casino sign were expected to be salient stimuli in the stimulus event (Kleinberg & Verschuere, 2015; Lieblich et al., 1976).

The orientation environment also gave the participants the chance to get used to being in a virtual reality environment. Then, the CIT task was started. The final part of the experiment was the administration of the lineups, one for each person or avatar that appeared in the stimulus event.
Deviating from the described procedure, in Experiment 1, each of the four CITs was preceded by the matching lineup, whereas in Experiment 2, each of the four lineups was preceded by the matching CIT. In Experiment 1, only actor-present lineups were used; in Experiments 2 and 3, the thief and bystander 1 lineups were either both present or both absent, as were the victim and bystander 2 lineups (i.e., two lineups were always absent, and two were present); and in Experiments 4 and 5, actor presence was completely counterbalanced. In Experiment 1, the sequence of the lineups was fixed (Thief-Victim-Bystander 1-Bystander 2); in Experiments 2 and 3, we used a Latin square (Thief-Victim-Bystander 1-Bystander 2 vs. Victim-Bystander 1-Bystander 2-Thief vs. Bystander 1-Bystander 2-Thief-Victim, etc.); in Experiment 4, lineup order (thief-victim vs. victim-thief) was counterbalanced; and in Experiment 5, lineup order was random. Testing sessions lasted approximately 30–40 min. The debriefing followed after termination of data collection.

Six additional objects falling into the categories consumer goods (hamburger, pack of cigarettes, can of beer, chocolate bar, bag of French fries, and bottle of whiskey) and façade decoration (hotel sign, Advent wreath, Dutch flag, art show sign, carnival garland, and occupation banner reading "This is ours") were created to serve as targets and irrelevants (foils). The objects that served as targets/irrelevants were randomly selected for each participant.

Lineups and lineup construction

The facial pictures described above were used to construct the actor-present and actor-absent lineups. Lineups were composed of six photographs numbered 1–6 that were arranged in two rows of three pictures (i.e., a simultaneous lineup). All distractors and the replacement (i.e., the extra distractor added to actor-absent lineups) fitted the general descriptions of the referring actor, as determined by presenting independent
samples of mock witnesses (ns between 20 and 31) who had not viewed the stimulus event with a description of each actor together with the referring lineup (e.g., 'She is about 20 years old. She has long, brown hair. She has a slim to normal figure'). These mock witnesses were then asked to select the person from the lineup who matched the description best (Doob & Kirshenbaum, 1973). Effective lineup sizes for actor-present and actor-absent lineups, determined as Tredoux's Es, were satisfactory and ranged from 3.2 to 5.6 of a possible 6 (M = 4.2; Tredoux, 1998, 1999).

A summary of the procedural specifics of the CITs of each experiment can be found in Table 1.

Results

CIT data preparation and overview of analyses

Prior to data analyses, trials with wrongful responses and reaction times faster than 150 ms (i.e., inattentive responding) or slower than 1500 ms were removed from the data set. Next, data were aggregated to result in the average reaction times per stimulus type per participant and probe (e.g., for Experiment 1, 2 × 4 variables would be computed: the mean reaction times to the probes and irrelevants, referring to the thief, victim, bystander 1, and bystander 2). For each experiment, a paired-sample t test contrasting probes vs. irrelevants was computed per role. Finally, a weighted mean estimate of the effect size across all five studies was established.

Procedure

Participants signed the informed consent form and provided demographic data. Before watching the stimulus event, participants in Experiments 1–3 were instructed to pay close attention to the film, because they would be asked questions about it later on. In Experiments 4 and 5, participants were instructed to pay particular attention to the faces and to encode them in as much detail as they could.
In Experiment 5, participants were given additional instructions about the use of the virtual reality goggles and were handed headphones once they had put on the goggles. They then first saw an orientation environment, which consisted of a big open space and allowed them to check if the goggles were placed correctly.

(Following previous work in the field, we had initially also removed all response times slower than 800 ms to account for possible inattentive responding or strategic slowing (Kleinberg & Verschuere, 2015; Noordraven & Verschuere, 2013; Verschuere & Kleinberg, 2015). Following the advice of reviewer Laura Visu-Petra, we reanalyzed the data with a longer deadline of 1500 ms (cf. Seymour et al., 2000). Unexpectedly, this led to better results, possibly because processing time for faces can often be longer than 800 ms (Ramon et al., 2011) and has been shown to be longer than for words (e.g., Ovaysikia, Tahir, Chan, & DeSouza, 2011), which served as stimuli in previous CIT studies. We therefore report the findings of the latter analyses. To enable comparison with previous studies using the 800-ms deadline, those findings are reported in Table 4 in the Appendix.)

(The literature commonly refers to target-present and target-absent lineups. This terminology interferes with the CIT terminology, though, in which the probe is the person previously seen, not the target. Therefore, we refer to actor-present and actor-absent lineups in this article when referring to lineups that do or do not include the person participants saw in the stimulus event.)
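The trimming and aggregation steps described under "CIT data preparation" can be sketched as follows. The per-trial record format, the simulated reaction times, and the helper names are hypothetical illustrations, not the authors' code; only the 150–1500 ms trimming window and the paired probe-vs-irrelevant comparison come from the paper.

```python
import random
from statistics import mean, stdev

def keep_trial(trial, lo=150, hi=1500):
    """Trimming rule from the Results section: drop wrongful responses
    and reaction times outside the 150-1500 ms window."""
    return trial["correct"] and lo <= trial["rt"] <= hi

def probe_vs_irrelevant(trials):
    """Aggregate the mean RT per stimulus type per participant, then run
    a paired comparison: within-subjects Cohen's d and t = d * sqrt(n)."""
    per_pp = {}
    for t in filter(keep_trial, trials):
        per_pp.setdefault(t["pp"], {"probe": [], "irrelevant": []})[t["type"]].append(t["rt"])
    diffs = [mean(v["probe"]) - mean(v["irrelevant"])
             for v in per_pp.values() if v["probe"] and v["irrelevant"]]
    d = mean(diffs) / stdev(diffs)
    return d * len(diffs) ** 0.5, d   # (paired t statistic, Cohen's d)

# Hypothetical demo: 30 mock witnesses whose probe RTs run ~20 ms slower.
random.seed(1)
trials = [{"pp": p, "type": s, "correct": True,
           "rt": random.gauss(480 if s == "probe" else 460, 60)}
          for p in range(30) for s in ("probe", "irrelevant") for _ in range(20)]
t_stat, d = probe_vs_irrelevant(trials)
```

Under these assumptions the paired t statistic equals d multiplied by the square root of the sample size, which is the form of the per-role tests reported in Table 2.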
Table 2: Reaction times, standard deviations, and inferential statistics for the pairwise comparisons of the reaction times for probes and irrelevant stimuli (including reaction times of 150–1500 ms)

| Study | Role | Played by actor | df | t | d | p | Probes, M (SD) | Irrelevants, M (SD) |
|---|---|---|---|---|---|---|---|---|
| 1 | Thief | A | 54 | −0.51 | −0.07 | .610 | 421 (76) | 424 (69) |
| 1 | Victim | B | 54 | 3.72 | 0.50 | < .001 | 466 (84) | 444 (64) |
| 1 | Bystander 1 | C | 54 | 0.68 | 0.05 | .499 | 426 (84) | 423 (68) |
| 1 | Bystander 2 | D | 54 | 6.17 | 0.83 | < .001 | 454 (74) | 419 (64) |
| 2 | Thief | EFGH | 106 | −0.11 | −0.01 | .913 | 374 (70) | 374 (58) |
| 2 | Victim | EFGH | 106 | −2.16 | −0.21 | .033 | 369 (64) | 376 (58) |
| 2 | Bystander 1 | EFGH | 106 | −1.41 | −0.14 | .161 | 367 (61) | 371 (53) |
| 2 | Bystander 2 | EFGH | 106 | −0.34 | −0.03 | .734 | 370 (63) | 371 (56) |
| 3 | Thief | EFGH | 83 | 1.70 | 0.19 | .093 | 494 (76) | 485 (55) |
| 3 | Victim | EFGH | 83 | 2.16 | 0.24 | .034 | 497 (76) | 484 (53) |
| 3 | Bystander 1 | EFGH | 83 | 0.96 | 0.10 | .341 | 486 (72) | 481 (57) |
| 3 | Bystander 2 | EFGH | 83 | 2.09 | 0.23 | .040 | 497 (81) | 484 (54) |
| 4 | Thief | IJ | 74 | 2.48 | 0.29 | .015 | 479 (64) | 466 (51) |
| 4 | Victim | IJ | 74 | 2.12 | 0.25 | .037 | 479 (61) | 469 (55) |
| 5 | Woman 1 | KL | 79 | −0.99 | −0.11 | .324 | 545 (81) | 551 (68) |
| 5 | Woman 2 | MN | 79 | 2.23 | 0.25 | .029 | 535 (77) | 521 (62) |
| All 5 studies | Across roles | | | | 0.14 (0.08, 0.19) | | | |
| Experiments 2–5 | Across roles | | | | 0.10 (0.05, 0.16) | | | |

Across the five studies, a very small effect size materialized. We reran the meta-analysis excluding Experiment 1. This was to account for the fact that in this experiment, the CIT outcome may have been impacted by the preceding lineup task, a procedural detail that may be sufficient to create a deviant response in the subsequent CIT. The yielded average effect size across Experiments 2–5 was 0.10 (95% confidence interval: 0.05; 0.16), a very small effect.

Comparison of reaction times to probes and irrelevants

Across the five experiments, we conducted 16 tests to compare probe vs. irrelevant reaction times. Eight of the tests showed no effect (|d| ≤ 0.20), and five tests displayed small effects in the expected direction; one test showed a moderate and one a large effect in the expected direction.
One test showed a small effect in the opposite direction. Table 2 provides the mean reaction times (and SDs) in response to facial probes and irrelevants and the inferential statistics.

In Experiment 5, two control objects were included in the CIT. Replicating earlier findings, the reaction times for probes were slower (object 1: M = 454 ms, SD = 58; object 2: M = 451 ms, SD = 48; collapsed: M = 453 ms, SD = 47) than for irrelevant stimuli (object 1: M = 442 ms, SD = 40; object 2: M = 440 ms, SD = 38; collapsed: M = 441 ms, SD = 38), t(79) = 2.63, p = .010, d = 0.29 (object 1); t(79) = 3.50, p = .001, d = 0.39 (object 2); t(79) = 4.26, p < .001, d = 0.48 (collapsed).

Meta-analysis across five studies

The five studies together yielded 16 effect size estimates. Using the reciprocal of the sampling variances as weights (cf. Gibbons et al., 1993), a weighted mean estimate of the effect size yielded an average effect size of 0.14 (95% confidence interval: 0.08; 0.19).

In addition, we reran the meta-analyses including only those participants who correctly identified the actor from an actor-present lineup. The results showed a small average effect size when looking at all five experiments [mean d = 0.38 (95% confidence interval: 0.24; 0.52)], whereas there was a very small effect, on average, when Experiment 1 was excluded [mean d = 0.15 (95% confidence interval: −0.06; 0.36)].

Eyewitness identification performance from traditional lineups

Table 3 shows the identification accuracy rates split by experiments and probes. The data concerning Experiments 2–5 must be treated with caution. This is because in these experiments, the lineup task was preceded by the CIT task. This familiarizes participants with the stimuli presented in the subsequent lineup and possibly introduces unconscious transference effects (Deffenbacher et al., 2006).
As a consequence, the identification task may have been quite difficult. This decision was made to avoid contamination of the CIT outcomes, which was the focus of this line of research. The results support this notion. The accuracy rates are also in line with those reported in the previous experiments using the same stimulus film (Sagana et al., 2015, 2014, Experiments 2a–c, 3; Sauerland et al., 2012).

Table 3: Identification accuracy rates for different roles across the five experiments

| Experiment | Role | Identification accuracy (%) | Do not know responses (%) |
|---|---|---|---|
| 1 | Thief | 60.4 | 12.7 |
| 1 | Victim | 92.2 | 7.3 |
| 1 | Bystander 1 | 73.5 | 10.9 |
| 1 | Bystander 2 | 18.2 | 20.0 |
| 1 | Across roles | 62.5 | 12.7 |
| 2 | Thief | 44.0 | 29.9 |
| 2 | Victim | 34.7 | 32.7 |
| 2 | Bystander 1 | 50.0 | 34.6 |
| 2 | Bystander 2 | 34.7 | 32.7 |
| 2 | Across roles | 40.8 | 32.5 |
| 3 | Thief | 30.7 | 26.2 |
| 3 | Victim | 37.8 | 46.4 |
| 3 | Bystander 1 | 35.7 | 33.3 |
| 3 | Bystander 2 | 39.6 | 36.9 |
| 3 | Across roles | 35.6 | 35.7 |
| 4 | Thief | 69.2 | 13.3 |
| 4 | Victim | 63.1 | 13.3 |
| 4 | Across roles | 66.2 | 13.3 |
| 5 | Woman 1 | 26.7 | 25.0 |
| 5 | Woman 2 | 19.6 | 30.0 |
| 5 | Object A | 67.9 | 30.0 |
| 5 | Object B | 59.0 | 51.3 |
| 5 | Across roles and stimuli | 41.7 | 34.1 |

Note: The data concerning Experiments 2–5 must be treated with caution, because the CIT task preceded the lineup task. This familiarizes participants with the stimuli presented in the subsequent lineup and possibly introduces unconscious transference effects (Deffenbacher et al., 2006). As a consequence, the identification task may have been quite difficult. Calculations of identification accuracy include positive and negative identification decisions made, but exclude do not know responses (i.e., the number of accurate responses divided by the number of accurate plus inaccurate responses).
In Experiments 2, 3, and 5, identification accuracy was somewhat lower (around 40%) compared to Experiment 1, where the identification measure was not challenged by a preceding CIT (63% accuracy on average). Furthermore, the proportion of don't know answers, which can be taken as an indication of the difficulty of the task, was higher in Experiments 2, 3, and 5 (33–36%) compared to Experiment 1 (13%). Experiment 4 constitutes an outlier in the sense that identification accuracy rates were equally high (or, if anything, even higher: 66%) and do not know responses equally low (13%) as in Experiment 1. This might be the result of our attempts to create a less complex stimulus film with only two actors and optimal viewing conditions, making the identification less difficult compared to all other experiments.

Discussion

In addition, results from a previous study make it seem unlikely that the stimulus persons used in our experiments were particularly difficult to encode. Specifically, the films used in Experiments 2 and 3 served as stimulus materials in a study looking at eyewitnesses' memory reports (Sauerland et al., 2014). Collapsed across different recall conditions, participants reported, on average, about 53 person details (i.e., details referring to the appearance of the individuals shown in the film, including facial details, description of clothing, build, etc.), of which, on average, 73% were accurate. Together, these findings do not seem to support the notion that it was particularly difficult to encode the actors shown in our stimulus films. Finally, while rerunning our meta-analyses including only participants who correctly identified the actor from an actor-present lineup increased the average effect size, this increase was carried by Experiment 1. It appears that viewing the lineup prior to the CIT, which was only the case in Experiment 1, improved CIT performance.
Accordingly, it seems most appropriate to consider the average effect sizes excluding It was the aim of the current line of research to test an alter- Experiment 1 as true effect of the reaction time-based CIT. native to traditional, explicit lineup identification for test- These effect sizes were very small (including all partici - ing cooperative eyewitnesses’ memory for faces, using an pants from Experiments 2–5: d = 0.05; including only par- indirect measure of face recognition. To this end, we trans- ticipants who accurately identified the actor from the lineup ferred the reaction time-based CIT methodology that is well in Experiments 2–5: d = 0.15), regardless of accurate actor established in the field of memory detection in suspects to identification. This confirms our conclusion that a reaction the field of eyewitness identification. The idea that reaction time-based CIT does not work for lineups, even if explicit times in a CIT task should be greater for faces that were recognition occurred. previously encountered in a stimulus event as compared to Second, a more likely explanation for our findings irrelevant foils was tested in a series of five experiments. concerns the careful matching of the employed faces, as The methodology of the studies sequentially progressed required by eyewitness identification procedural guidelines and addressed possible explanations for non-significant and (e.g., Wells et al., 1998). Lineup pictures were deliberately inconsistent findings. Across 16 reaction time comparisons, selected to match the general description of the probes, lead- seven were in favor of reaction time-based CIT predictions, ing to matched hair color and length, body type, and age. whereas half of the tests returned no significant effects and In fact, during debriefing, many participants spontaneously one effect was opposite to our expectations. 
A meta-analy - commented on the resemblance of the different stimulus sis showed that the overall effect size was very small. Our faces. While the selection of individuals that match in their findings do not support the use of the reaction time-based general description is a necessity in lineup construction, it CIT for testing cooperative eyewitnesses’ facial recognition might be obstructive for the CIT. Indeed, it was found that memory. These findings contrast with the finding that the the more the irrelevants resemble the probe, the smaller the ERP-based CIT may be useful for eyewitness identification CIT effect (Ben-Shakhar & Gati, 1987). This may explain (Lefebvre et al., 2007, 2009). At least three explanations why Seymour and Kerlin (2008; see also Meijer et al., 2007) need to be considered for this apparent discrepancy. did find the reaction time-based CIT to be responsive to First, it is possible that the stimulus event did not allow face recognition. They selected their facial stimuli from the for sufficiently deep encoding of the faces. We think that this Aberdeen Psychological Image Collection, which contains explanation is unlikely, because the considerable identifica - pictures of 116 people that have not been selected to match tion accuracy rates in Experiment 1—where the lineups were any criteria. While Lefebvre et al.’s facial stimuli (2007, presented prior to the CIT—are in line with accuracy rates 2009) were matched for some attributes, such as gender, age, reported in the literature (e.g., Clark et al., 2008; Fitzgerald race, and hair length, no information was given about other & Price 2015; Steblay et al., 2011) and with those reported features such as hair color or hair style, and no measures 1 3 Psychological Research (2019) 83:1210–1222 1219 of effective lineup size were provided. 
Thus, it is possible To summarize, the results of the presented five experi- that the conditions for creating a fair lineup and creating an ments indicate that the reaction time-based CIT is not a effective CIT are mutually exclusive. This notion was also valid means of testing facial recognition in cooperative confirmed by our findings referring to objects in Experiment eyewitnesses with matched faces. The findings indicate 5. Here, the expected CIT effect was found. The fact that the that it is important to map how stimulus distinctiveness crime-related objects (e.g., Hotel sign) were quite distinct affects the validity of the reaction time-based CIT. from the irrelevant foils (e.g., Advent wreath, Dutch flag, Acknowledgements We thank Nick J. Broers for running the meta- Art show sign, carnival garland, occupation banner reading analyses; Chaim Kuhnreich, Lena Leven, and Lisa Lewandowski for “This is ours”) may have contributed to the CIT effect for their help in collecting the data, Katerina Georgiadou for preparing the the objects. One way to test this idea would be by conducting data for analyses, Jacco Ronner for creating the Presentation tasks, and a study with closely matched objects or with non-matched Richard Benning for creating the virtual reality event. faces. Third, our findings are in line with the emerging idea Compliance with ethical standards that different psychological processes may underlie the Conflict of interest All authors declare that they have no conflict of reaction time-based CIT and the ERP-based CIT (klein interest. Selle et al., 2017). Lefebvre et al. (2007, 2009) provided evidence that the ERP-based CIT is sensitive to face rec- Ethical standards All procedures performed in studies involving human participants were in accordance with the ethical standards of the ognition, independent of active concealment attempts. 
institutional research committee and with the 1964 Helsinki declaration Our series of studies points to the possibility that the reac- and its later amendments or comparable ethical standards. tion time-based CIT critically depends on active conceal- ment, explaining our observed null effects in cooperative Informed consent Informed consent was obtained from all individual participants included in the study. witnesses. This reasoning is supported by Suchotzki et al. (2015) who suggested that the reaction times increase to probes reflects response inhibition (see also Seymour Open Access This article is distributed under the terms of the Creative & Schumacher, 2009; Verschuere & De Houwer, 2011). Commons Attribution 4.0 International License (http://creativecom- mons.org/licenses/by/4.0/), which permits unrestricted use, distribu- Suchotzki et al. (2015) observed a reaction time-based tion, and reproduction in any medium, provided you give appropriate CIT effect only when mock crime participants attempted credit to the original author(s) and the source, provide a link to the to hide crime knowledge, but not when admitting crime Creative Commons license, and indicate if changes were made. knowledge. Thus, it is possible that stronger forms of active deception may be crucial for obtaining the reaction time-based CIT effect, than achieved here. This leads to Appendix A the intriguing possibility that (1) CIT measures that do not depend on active deception—electrodermal respond- See Table 4. ing and the P300 ERP may be effective in both coopera- tive and non-cooperative eyewitnesses and (2) the reac- tion time-based CIT may be effective in non-cooperative (i.e., deceptive) eyewitnesses (cf. Lefebvre et al., 2009). We thank an anonymous reviewer for bringing this point to our attention. 
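The per-comparison effect sizes listed in Table 4 are consistent with the standardized mean difference for a paired-samples t test, d = t/√n with n = df + 1 (cf. Gibbons, Hedeker, & Davis, 1993). The following Python sketch is illustrative only: the helper names are ours, and the simplified sample-size-weighted average is not the exact Gibbons et al. meta-analytic estimator used in the paper, which may yield slightly different values.

```python
from math import sqrt

def paired_d(t: float, df: int) -> float:
    """Effect size for a paired-samples t test: d = t / sqrt(n), n = df + 1."""
    return t / sqrt(df + 1)

def weighted_mean_d(rows) -> float:
    """Simplified sample-size-weighted mean of per-comparison d values.

    rows: iterable of (t, df) pairs. Illustrative approximation, not the
    Gibbons, Hedeker, and Davis (1993) procedure used in the paper.
    """
    rows = list(rows)  # allow generators; we iterate twice
    total_n = sum(df + 1 for _, df in rows)
    return sum(paired_d(t, df) * (df + 1) for t, df in rows) / total_n

# Experiment 1, "Victim" row of Table 4: t(51) = 3.93 yields d ≈ 0.54,
# and the "Thief" row, t(51) = -2.78, yields d ≈ -0.39, matching the
# reported values up to rounding.
d_victim = paired_d(3.93, 51)
d_thief = paired_d(-2.78, 51)
```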
Table 4  Reaction times, standard deviations, and inferential statistics for the pairwise comparisons of the reaction times for probes and irrelevant stimuli (including reaction times of 150–800 ms)

Study  Role         Played by actor  df    t       d       p       Probes, M (SD)  Irrelevants, M (SD)
1      Thief        A                51   −2.78   −0.38    .008    397 (46)        408 (52)
1      Victim       B                51    3.93    0.54   <.001    445 (64)        427 (55)
1      Bystander 1  C                51   −0.90   −0.12    .371    408 (66)        410 (57)
1      Bystander 2  D                51    7.73    1.07   <.001    443 (65)        409 (54)
2      Thief        EFGH             106  −0.36   −0.03    .721    368 (63)        369 (53)
2      Victim       EFGH             106  −2.30   −0.22    .023    364 (57)        371 (52)
2      Bystander 1  EFGH             106  −1.22   −0.12    .227    364 (58)        367 (50)
2      Bystander 2  EFGH             106  −0.55   −0.05    .582    365 (57)        366 (51)
3      Thief        EFGH             74    0.16    0.02    .870    473 (47)        473 (45)
3      Victim       EFGH             74    1.81    0.21    .075    478 (50)        471 (44)
3      Bystander 1  EFGH             74    0.19    0.02    .850    468 (54)        467 (45)
3      Bystander 2  EFGH             74    1.14    0.13    .259    478 (60)        472 (47)
4      Thief        IJ               74    2.08    0.24    .041    467 (55)        459 (47)
4      Victim       IJ               74    2.21    0.26    .030    468 (56)        460 (50)
5      Woman 1      KL               76   −0.33   −0.04    .746    519 (57)        520 (50)
5      Woman 2      MN               76    1.71    0.19    .091    507 (51)        501 (49)

Weighted mean d (95% CI), across roles:
All 5 studies: 0.07 (0.02, 0.13)
Experiments 2–5: 0.05 (−0.00, 0.11)

References

Ben-Shakhar, G., & Gati, I. (1987). Common and distinctive features of verbal and pictorial stimuli as determinants of psychophysiological responsivity. Journal of Experimental Psychology: General, 116, 91–105. https://doi.org/10.1037/0096-3445.116.2.91
Brewer, N., & Palmer, M. A. (2010). Eyewitness identification tests. Legal and Criminological Psychology, 15, 77–96. https://doi.org/10.1348/135532509x414765
Brewer, N., & Wells, G. L. (2011). Eyewitness identification. Current Directions in Psychological Science, 20, 24–27. https://doi.org/10.1177/0963721410389169
Clark, S. E., Howell, R. T., & Davey, S. L. (2008). Regularities in eyewitness identification. Law and Human Behavior, 32, 187–218. https://doi.org/10.1007/s10979-006-9082-4
Deffenbacher, K. A., Bornstein, B. H., & Penrod, S. D. (2006). Mugshot exposure effects: Retroactive interference, mugshot commitment, source confusion, and unconscious transference. Law and Human Behavior, 30, 287–307. https://doi.org/10.1007/s10979-006-9008-1
Doob, A. N., & Kirshenbaum, H. M. (1973). Bias in police lineups—partial remembering. Journal of Police Science and Administration, 1, 287–293.
Dupuis, P. R., & Lindsay, R. C. L. (2007). Radical alternatives to traditional lineups. In R. C. L. Lindsay, D. F. Ross, J. D. Read, & M. P. Toglia (Eds.), The handbook of eyewitness psychology, Vol. II: Memory for people (pp. 179–200). Mahwah: Lawrence Erlbaum Associates Publishers.
Farwell, L. A., & Donchin, E. (1991). The truth will out: Interrogative polygraphy ("lie detection") with event-related brain potentials. Psychophysiology, 28, 531–547. https://doi.org/10.1111/j.1469-8986.1991.tb01990.x
Fitzgerald, R. J., & Price, H. L. (2015). Eyewitness identification across the life span: A meta-analysis of age differences. Psychological Bulletin, 141, 1228–1265. https://doi.org/10.1037/bul0000013
Gibbons, R. D., Hedeker, D. R., & Davis, J. M. (1993). Estimation of effect size from a series of experiments involving paired comparisons. Journal of Educational Statistics, 18, 271–279. https://doi.org/10.3102/10769986018003271
Gorini, A., Gaggioli, A., & Riva, G. (2007). Virtual worlds, real healing. Science, 318, 1549b. https://doi.org/10.1126/science.318.5856.1549b
Kassin, S. M., Bogart, D., & Kerner, J. (2012). Confessions that corrupt: Evidence from the DNA exoneration case files. Psychological Science, 23, 41–45. https://doi.org/10.1177/0956797611422918
Kim, K., Park, K. K., & Lee, J.-H. (2014). The influence of arousal and expectation on eyewitness memory in a virtual environment. Cyberpsychology, Behavior, and Social Networking, 17, 709–713. https://doi.org/10.1089/cyber.2013.0638
klein Selle, N., Verschuere, B., Kindt, M., Meijer, E., & Ben-Shakhar, G. (2017). Unraveling the roles of orienting and inhibition in the concealed information test. Psychophysiology. https://doi.org/10.1111/psyp.12825
Kleinberg, B., & Verschuere, B. (2015). Memory detection 2.0: The first web-based memory detection test. PLoS One, 10, e0118715. https://doi.org/10.1371/journal.pone.0118715
Kleinberg, B., & Verschuere, B. (2016). The role of motivation to avoid detection in reaction time-based concealed information detection. Journal of Applied Research in Memory and Cognition, 5, 43–51. https://doi.org/10.1016/j.jarmac.2015.11.004
Leach, A.-M., Cutler, B. L., & van Wallendael, L. (2009). Lineups and eyewitness identification. Annual Review of Law and Social Science, 5, 157–178. https://doi.org/10.1146/annurev.lawsocsci.093008.131529
Lefebvre, C. D., Marchand, Y., Smith, S. M., & Connolly, J. F. (2007). Determining eyewitness identification accuracy using event-related brain potentials (ERPs). Psychophysiology, 44, 894–904. https://doi.org/10.1111/j.1469-8986.2007.00566.x
Lefebvre, C. D., Marchand, Y., Smith, S. M., & Connolly, J. F. (2009). Use of event-related brain potentials (ERPs) to assess eyewitness accuracy and deception. International Journal of Psychophysiology, 73, 218–225. https://doi.org/10.1016/j.ijpsycho.2009.03.003
Lieblich, I., Ben-Shakhar, G., & Kugelmass, S. (1976). Validity of the guilty knowledge technique in a prisoner's sample. Journal of Applied Psychology, 61, 89–93. https://doi.org/10.1037/0021-9010.61.1.89
Lukasz, G., Kleinberg, B., & Verschuere, B. (2017). Familiarity-related filler trials increase the validity of the reaction times-based concealed information test. Journal of Applied Research in Memory and Cognition.
Lykken, D. T. (1959). The GSR in the detection of guilt. Journal of Applied Psychology, 43, 385–388. https://doi.org/10.1037/h0046060
Meijer, E. H., Smulders, F. T. Y., Merckelbach, H. L. G. J., & Wolf, A. G. (2007). The P300 is sensitive to concealed face recognition. International Journal of Psychophysiology, 66, 231–237. https://doi.org/10.1016/j.ijpsycho.2007.08.001
Meijer, E. H., Verschuere, B., Gamer, M., Merckelbach, H., & Ben-Shakhar, G. (2016). Deception detection with behavioral, autonomic, and neural measures: Conceptual and methodological considerations that warrant modesty. Psychophysiology, 53, 593–604. https://doi.org/10.1111/psyp.12609
Moors, A., & De Houwer, J. (2006). Automaticity: A theoretical and conceptual analysis. Psychological Bulletin, 132, 297. https://doi.org/10.1037/0033-2909.132.2.297
Newcombe, N., & Fox, N. A. (1994). Infantile amnesia: Through a glass darkly. Child Development, 65, 31–40. https://doi.org/10.1111/j.1467-8624.1994.tb00732.x
Noordraven, E., & Verschuere, B. (2013). Predicting the sensitivity of the reaction time-based concealed information test. Applied Cognitive Psychology, 27, 328–335. https://doi.org/10.1002/acp.2910
Ovaysikia, S., Thari, K. A., Chan, J. L., & DeSouza, F. X. (2010). Word wins over face: Emotional Stroop effect activates the frontal cortical network. Frontiers in Human Neuroscience, 4, 234. https://doi.org/10.3389/fnhum.2010.00234
Parliament, L., & Yarmey, A. D. (2002). Deception in eyewitness identification. Criminal Justice and Behavior, 29, 734–746. https://doi.org/10.1177/009385402237925
Ramon, M., Caharel, S., & Rossion, B. (2011). The speed of recognition of personally familiar faces. Perception, 40, 437–449. https://doi.org/10.1068/p6794
Riva, G. (2005). Virtual reality in psychotherapy: Review. CyberPsychology and Behavior, 8, 220–230. https://doi.org/10.1089/cpb.2005.8.220
Sagana, A., Sauerland, M., & Merckelbach, H. (2014). 'This is the person you selected': Eyewitnesses' blindness for their own facial recognition decisions. Applied Cognitive Psychology, 28, 753–764. https://doi.org/10.1002/acp.3062
Sagana, A., Sauerland, M., & Merckelbach, H. (2015). Eyewitnesses' blindness for own- and other-race identification decisions. In A. Sagana, A blind man's bluff: Choice blindness in eyewitness testimony (Doctoral dissertation) (pp. 105–120). Maastricht: Maastricht University. http://pub.maastrichtuniversity.nl/0ac07c43-c029-4d52-8f91-c8b3b6932d39. Accessed 30 Mar 2017.
Sauerland, M., Krix, A. C., van Kan, N., Glunz, S., & Sak, A. (2014). Speaking is silver, writing is golden? The role of cognitive and social factors in written versus spoken witness accounts. Memory and Cognition, 42, 978–992. https://doi.org/10.3758/s13421-014-0401-6
Sauerland, M., Sagana, A., & Sporer, S. L. (2012). Assessing non-choosers' eyewitness identification accuracy from photographic showups by using confidence and response times. Law and Human Behavior, 36, 394–403. https://doi.org/10.1037/h0093926
Schultheis, M. T., & Rizzo, A. A. (2001). The application of virtual reality technology in rehabilitation. Rehabilitation Psychology, 46, 296–311. https://doi.org/10.1037/0090-5550.46.3.296
Seymour, T. L., & Kerlin, J. R. (2008). Successful detection of verbal and visual concealed knowledge using an RT-based paradigm. Applied Cognitive Psychology, 22, 475–490. https://doi.org/10.1002/acp.1375
Seymour, T. L., & Schumacher, E. H. (2009). Electromyographic evidence for response conflict in the exclude recognition task. Cognitive, Affective, and Behavioral Neuroscience, 9, 71–82. https://doi.org/10.3758/CABN.9.1.71
Seymour, T. L., Seifert, C. M., Shafto, M. G., & Mosmann, A. L. (2000). Using response time measures to assess "guilty knowledge". Journal of Applied Psychology, 85, 30–37. https://doi.org/10.1037/0021-9010.85.1.30
Steblay, N. K., Dysart, J. E., & Wells, G. L. (2011). Seventy-two tests of the sequential lineup superiority effect: A meta-analysis and policy discussion. Psychology, Public Policy, and Law, 17, 99–139. https://doi.org/10.1037/a0021650
Stormark, K. M. (2004). Skin conductance and heart-rate responses as indices of covert face recognition in preschool children. Infant and Child Development, 13, 423–433. https://doi.org/10.1002/icd.368
Suchotzki, K., Crombez, G., Smulders, F. T., Meijer, E., & Verschuere, B. (2015). The cognitive mechanisms underlying deception: An event-related potential study. International Journal of Psychophysiology, 95, 395–405. https://doi.org/10.1016/j.ijpsycho.2015.01.010
Suchotzki, K., Verschuere, B., Peth, J., Crombez, G., & Gamer, M. (2014). Manipulating item proportion and deception reveals crucial dissociation between behavioral, autonomic, and neural indices of concealed information. Human Brain Mapping, 36, 427–439. https://doi.org/10.1002/hbm.22637
Suchotzki, K., Verschuere, B., Van Bockstaele, B., Ben-Shakhar, G., & Crombez, G. (2017). Lying takes time: A meta-analysis on reaction time measures of deception. Psychological Bulletin. https://doi.org/10.1037/bul0000087
Technical Working Group for Eyewitness Evidence. (1999). Eyewitness evidence: A guide for law enforcement. Washington, DC: US Department of Justice, Office of Justice Programs. http://ncjrs.gov/pdffiles1/nij/178240.pdf. Accessed 30 Mar 2017.
Tredoux, C. G. (1998). Statistical inference on measures of lineup fairness. Law and Human Behavior, 22, 217–237. https://doi.org/10.1023/A:1025746220886
Tredoux, C. G. (1999). Statistical considerations when determining measures of lineup size and lineup bias. Applied Cognitive Psychology, 13, 9–26. https://doi.org/10.1002/(SICI)1099-0720(199911)13:1+%3CS9:AID-ACP634%3E3.0.CO;2-1
Verschuere, B., Ben-Shakhar, G., & Meijer, E. (2011). Memory detection: Theory and application of the concealed information test. Cambridge: Cambridge University Press.
Verschuere, B., Crombez, G., De Clercq, A., & Koster, E. H. W. (2004). Autonomic and behavioral responding to concealed information: Differentiating orienting and defensive responses. Psychophysiology, 41, 461–466. https://doi.org/10.1111/j.1469-8986.00167.x
Verschuere, B., & De Houwer, J. (2011). Detecting concealed information in less than a second: Response latency-based measures. In Memory detection: Theory and application of the concealed information test (pp. 46–62).
Verschuere, B., & Kleinberg, B. (2015). ID-check: Online concealed information test reveals true identity. Journal of Forensic Sciences, 61, S237–S240. https://doi.org/10.1111/1556-4029.12960
Verschuere, B., & Kleinberg, B. (2016). Assessing autobiographical memory: The web-based autobiographical implicit association test. Memory, 1–11. https://doi.org/10.1080/09658211.2016.1189941
Verschuere, B., Kleinberg, B., & Theocharidou, K. (2015). Reaction time-based memory detection: Item saliency effects in the single-probe and the multiple-probe protocol. Journal of Applied Research in Memory and Cognition, 4, 59–65. https://doi.org/10.1016/j.jarmac.2015.01.001
Visu-Petra, G., Miclea, M., & Visu-Petra, L. (2012). Reaction time-based detection of concealed information in relation to individual differences in executive functioning. Applied Cognitive Psychology, 26, 342–351. https://doi.org/10.1002/acp.1827
Wells, G. L., Memon, A., & Penrod, S. D. (2006). Eyewitness evidence: Improving its probative value. Psychological Science in the Public Interest, 7, 45–75. https://doi.org/10.1111/j.1529-1006.2006.00027.x
Wells, G. L., Small, M., Penrod, S., Malpass, R. S., Fulero, S. M., & Brimacombe, C. E. (1998). Eyewitness identification procedures: Recommendations for lineups and photospreads. Law and Human Behavior, 22, 603–647. https://doi.org/10.1023/A:1025750605807
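The accuracy figures in Table 3 above follow the computation rule stated in its note: "do not know" responses are excluded, and accuracy is the share of accurate decisions among identification decisions actually made. A minimal Python sketch of that rule (the function name and the example counts are hypothetical; only the formula comes from the paper):

```python
def identification_accuracy(accurate: int, inaccurate: int) -> float:
    """Proportion accurate among witnesses who made a positive or negative
    identification decision; "do not know" responses are excluded before
    calling this function (hypothetical helper, per the note to Table 3)."""
    decisions = accurate + inaccurate
    if decisions == 0:
        raise ValueError("no identification decisions were made")
    return accurate / decisions

# Hypothetical counts: 29 accurate and 19 inaccurate decisions; any
# "do not know" responses are simply not passed in. Yields ~0.604,
# i.e., an accuracy rate of about 60.4%.
rate = identification_accuracy(29, 19)
```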

Journal: Psychological Research (Springer Journals)

Published: November 27, 2017
