Behav Res (2018) 50:1047–1054
DOI 10.3758/s13428-017-0925-3

A standardized set of 3-D objects for virtual reality research and applications

David Peeters

Published online: 23 June 2017
© The Author(s) 2017. This article is an open access publication

Abstract  The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.

Keywords  Virtual reality · 3D-objects · Database · Stimuli

* David Peeters
  firstname.lastname@example.org
  Max Planck Institute for Psycholinguistics, P.O. Box 310, NL-6500 AH Nijmegen, The Netherlands

Visual representations of individual objects have been an essential type of experimental stimulus in several domains of scientific inquiry, including attention, language, memory, and visual perception research. Already at the end of the 19th century, James McKeen Cattell developed an ingenious instrument that allowed for the consecutive presentation of individual pictures of objects (and other visual stimuli such as words and numerals) to an observer (Cattell, 1885). The use of visual stimuli in such an experimental context led to theoretically interesting findings, such as that words are named faster than pictures and that pictures are named faster in one's first language than in one's second language (Cattell, 1885; Levelt, 2013). Over the years, picture-naming tasks have continued to play a pivotal role in psychological and neurological research, for instance in the development of cognitive models of speech production (e.g., Levelt, Roelofs, & Meyer, 1999).

Reaching meaningful and valid theoretical conclusions critically hinges on the use of well-controlled experimental stimuli. Therefore, standardized, normative databases of picture stimuli have been crucial in controlling for the factors that influence picture recognition and picture-naming latencies, as well as in enabling the comparison of results across different studies and different samples of participants. The most influential standardized picture database to date was developed by Snodgrass and Vanderwart (1980). It consists of 260 black-and-white line drawings standardized for name agreement (the degree to which participants produce the same name for a given picture), image agreement (the degree to which participants' mental image of a concept corresponds to the visually depicted concept), familiarity (the degree to which participants come in contact with or think about a depicted concept in everyday life), and visual complexity (the amount of detail or intricacy of line in the picture) in native speakers of American English. Over the years, similar picture databases have been introduced and standardized for other languages, including British English, Bulgarian, Dutch, French, German, Hungarian, Icelandic, Italian, Japanese, Mandarin Chinese, and Modern Greek (Alario & Ferrand, 1999; Barry, Morrison, & Ellis, 1997; Bonin, Peereman, Malardier, Méot, & Chalard, 2003; Dell'Acqua, Lotto, & Job, 2000; Dimitropoulou, Duñabeitia, Blitsas, & Carreiras, 2009; Martein, 1995; Nishimoto, Miyawaki, Ueda, Une, & Takahashi, 2005; Nisi, Longoni, & Snodgrass, 2000; Sanfeliu & Fernandez, 1996; Szekely et al., 2004; Van Schagen, Tamsma, Bruggemann, Jackson, & Michon, 1983; Viggiano, Vannucci, & Righi, 2004; Vitkovitch & Tyrrell, 1995; Wang, 1997).

Such black-and-white line drawings typically used in experiments are abstractions of real-world objects. They lack the texture, color, and shading information of the natural objects that we encounter in the real world. One may therefore doubt whether results obtained in studies using line drawings will fully generalize to everyday situations. In a first attempt to increase the ecological validity of experimental stimuli, standardized databases have been developed that include gray-scale or colored photographs of objects (e.g., Adlington, Laws, & Gale, 2009; Brodeur, Dionne-Dostie, Montreuil, & Lepage, 2010; Migo, Montaldi, & Mayes, 2013; Moreno-Martínez & Montoro, 2012; Viggiano et al., 2004). Indeed, in certain cases color information in a picture or a line drawing enhances object recognition, such as when several objects within a category (e.g., types of fruit) have relatively similar shapes (e.g., apple, orange, peach) but different diagnostic colors (see, e.g., Laws & Hunter, 2006; Price & Humphreys, 1989; Rossion & Pourtois, 2004; Wurm, Legge, Isenberg, & Luebker, 1993). Importantly, the use of more ecologically valid stimuli significantly increases the odds of experimental findings being generalizable to everyday situations of object recognition, naming, and memory. Despite the availability of color and surface details in photographs of objects, there is still a large gap between observing a picture of an object on a small computer monitor in the lab and encountering that object in the real world. One important difference is the two-dimensional (2-D) nature of the line drawing or photograph versus the three-dimensional (3-D) nature of the objects we encounter in the wild.

In further pursuit of establishing the ecological validity of psychological and neuroscientific findings and theory in general, researchers have now started to exploit recent advances in immersive virtual reality (VR) technology (see Bohil, Alicea, & Biocca, 2011; Fox, Arena, & Bailenson, 2009; Peeters & Dijkstra, 2017; Slater, 2014). In immersive virtual environments, participants' movements are tracked and their digital surroundings rendered, usually via large projection screens or head-mounted displays (Fox et al., 2009). This allows researchers to immerse participants in rich environments that resemble real-world settings, while maintaining full experimental control. Critically, such environments will often contain a multitude of 3-D objects. One can think of the furniture in a virtual classroom, the food items in a virtual restaurant, the groceries in a virtual supermarket, or even the clothes that a virtual agent or avatar is wearing. Whether participants recognize the 3-D objects will depend, among other factors, on those objects' graphical quality. However, producing realistic 3-D objects takes time as well as graphic design skills. An open-access database of standardized 3-D objects for VR experiments and applications would be an important step forward in facilitating such research and making the findings comparable across different studies and different groups of participants.

The present study, therefore, introduces a database of 147 colored 3-D objects standardized for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The 3-D objects are freely available from an online database and can be used for VR and augmented reality research and applications. Researchers may use them in the virtual, 3-D equivalents of traditional object recognition and object-naming experiments, to test whether original findings will generalize to situations of more naturalistic vision that include depth cues and richer environments (e.g., Eichert, Peeters, & Hagoort, 2017; Tromp, Peeters, Meyer, & Hagoort, 2017). Moreover, these 3-D objects can be used in any virtual setting that requires the presence of objects. Using a 3-D object from the database will be faster than designing the object from scratch. Moreover, on the basis of the standardized norms, researchers may select 3-D objects that fit the purpose of their specific research question.

Method

Participants

A total of 168 native Dutch speakers (84 female, 84 male; mean age 22 years; age range 18–31 years) participated in the study. Each task (name agreement, image agreement, familiarity, and visual complexity) included 42 different participants (21 female, 21 male). One additional participant in the name agreement task was replaced due to technical problems during the experiment. All of the participants were Dutch; studied in Nijmegen, The Netherlands; and had Dutch as their single native language. They were university students, which means that they had been enrolled in at least 12 years of formal education. They all had normal or corrected-to-normal vision and no language or hearing impairments or history of neurological disease. The participants provided informed consent and were paid for participation. Ethical approval for the study was granted by the ethics board of Radboud University's Faculty of Social Sciences.

Materials

A total of 150 3-D objects were created by an in-house graphics designer for ongoing experimental VR studies in our lab. The objects were created for immersive virtual environments using the 3-D computer graphics software Autodesk Maya (Autodesk Inc., 2016). Each object was designed to represent a stereotypic instance of a specific object name. The objects belonged to several different semantic categories, including food items, furniture, clothing, toys, and vehicles (see the URL provided below and Fig. 1). The texture added to the objects' surfaces was either custom-made in the graphics software or taken from freely available pictures from the Internet. Three objects were not included in the database, because the majority of participants in the name agreement task did not recognize the object intended by the designer. Hence, the database contains 147 objects in total. All of these objects are made freely available from an online source in both .OSGB and .FBX format, such that they can be used with the Vizard or Unity 3D software. For each object, an .OSGB file, an .FBX file, and a 2-D screenshot are provided at https://hdl.handle.net/1839/CA8BDA0E-B042-417F-8661-8810B57E6732@view. Figure 1 presents screenshots of a subset of the objects (in 2-D).

Procedure

In each task, after having provided informed consent, participants were seated in a chair in the middle of a CAVE system (Cruz-Neira, Sandin, & DeFanti, 1993), such that the three screens covered their entire horizontal visual field (see below). They put on VR glasses, which were part of a tracking system that monitored the position and direction of the participant's head, controlling the correct perspective of the visual display. The eyes of the participant were approximately 180 cm away from the middle screen. Objects were presented one by one in random order against a simple background for 7 s in the center of the screen facing the participant. We aimed to present the objects in expected real-world size. A number (1 to 150) was displayed next to the object that corresponded with a number on the answer sheet or file. The procedure in each of the four tasks was kept similar to the procedure used for standardization of picture databases (e.g., Snodgrass & Vanderwart, 1980). For all four tasks, participants were informed that we were setting up a database of 3-D objects made by an in-house designer and that we would like to know people's opinion about the objects. Each task consisted of a single session without breaks. To include as many objects as possible in the database, no practice session with practice objects preceded the task. Instead, the experimenter checked before the start of the experiment whether the participant completely understood the task. For these simple tasks, this procedure worked well.

In the name agreement task, participants were instructed to carefully look at the object and type the name of each object into a laptop they held on their lap. They were told that a name could consist of a maximum of two words. They were asked to type in "OO" (for Object Onbekend, "unknown object") if they did not know the object, "NO" (for Naam Onbekend, "name unknown") if they knew the object but not its name, and "PT" (for Puntje van de Tong, "tip of the tongue") for objects that elicited a tip-of-the-tongue state.
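For illustration, the per-object name-agreement measures that such typed responses feed into (the modal name, the percentage of participants producing it, and the H statistic of Snodgrass & Vanderwart, 1980) can be sketched in a few lines of Python. The helper name, the toy response lists, and the choice to exclude the OO/NO/PT codes from the name distribution are illustrative assumptions, not taken from the article:

```python
from collections import Counter
from math import log2

# Illustrative assumption: the special codes are tallied separately and
# excluded from the distribution over which H is computed.
SPECIAL = {"OO", "NO", "PT"}  # unknown object / name unknown / tip of the tongue

def name_agreement(responses):
    """Return (modal name, % of namers producing it, H statistic) for one object."""
    names = [r.strip().lower() for r in responses if r.strip().upper() not in SPECIAL]
    counts = Counter(names)
    total = sum(counts.values())
    modal, modal_n = counts.most_common(1)[0]
    # H = sum over unique names of p * log2(1/p); H = 0 means perfect agreement,
    # and H grows as responses spread over more distinct names.
    h = sum((n / total) * log2(total / n) for n in counts.values())
    return modal, 100 * modal_n / total, h

# Hypothetical data: 40 participants type "stoel", one "kruk", one "OO"
print(name_agreement(["stoel"] * 40 + ["kruk", "OO"]))  # high %NA, low H
```

Note that H and the modal-name percentage capture related but distinct information, which is why the article reports both.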
Henceforth, these answer options will be referred to by their commonly used English acronyms: respectively, DKO ("don't know the object"), DKN ("don't know the name"), and TOT ("tip of the tongue"). Participants were told that they had 7 s to look at each object and type in its name. The task took about 25 min.

In the image agreement task, participants were instructed that for each object they would first see its name (i.e., the modal name derived from the name agreement task, defined as the unique name that was produced by the largest number of participants in the name agreement task) on the 3-D screen in front of them for 4 s, after which they would see the corresponding 3-D object for 7 s. They were instructed to (passively, i.e., without saying it out loud) read the name of the object and imagine what an object corresponding to that name would normally look like. On a rating form, they then rated for each object the correspondence between their mental image and the presented 3-D object on a 5-point scale. A rating of 1 indicated low agreement, which meant a poor match to their mental image. A rating of 5 indicated high agreement, which meant a strong match to their mental image. For each object they were asked to circle Geen Beeld ("no image") if they did not manage to form a mental image for an object, and Ander Object ("different object") if they had a different object in mind than the one depicted. This task took about 35 min.

In the familiarity task, participants were instructed to look at each object and rate on a 5-point scale how familiar they were with the object. Familiarity was defined as the degree to which the participant usually comes in contact with the object or thinks about the concept. A rating of 1 indicated that the participant was not familiar at all with the object. A rating of 5 indicated that the participant was very familiar with the object. This task took about 25 min.

In the visual complexity task, participants were instructed to look at each object and rate on a 5-point scale how visually complex they found it. Complexity was defined as the amount of detail or the intricacy of the lines in each object. Color was not mentioned in the instructions. A rating of 1 indicated an object with very few details, and a rating of 5 indicated a very detailed object. This task took about 25 min.

Fig. 1  Screenshots of a subset of the standardized 3-D objects

Apparatus

The CAVE system consisted of three screens (255 cm × 330 cm; VISCON GmbH, Neukirchen-Vluyn, Germany) that were arranged at right angles. Two projectors (F50, Barco N.V., Kortrijk, Belgium) illuminated each screen indirectly by means of a mirror behind the screen. For each screen, the two projectors showed two vertically displaced images that were overlapping in the middle of the screen. Thus, the complete display on each screen was visible only as a combined overlay of the two projections. Each object was presented on the screen facing the participants.

For optical tracking, infrared motion capture cameras (Bonita 10, Vicon Motion Systems Ltd, UK) and the Tracker 3 software (Vicon Motion Systems Ltd, UK) were used. Six cameras were positioned at the upper edges of the CAVE screens, and four cameras were placed at the bottom edges. All cameras were oriented toward the middle of the CAVE system. Optical head-tracking was accomplished by placing light reflectors on both sides of the VR glasses. Three spherical reflectors were connected on a plastic rack, and two such racks with a mirrored version of the given geometry were manually attached to both sides of the glasses. The reflectors worked as passive markers that can be detected by the infrared tracking system in the CAVE. The tracking system was trained to the specific geometric structure of the markers and detected the position and orientation of the glasses with an accuracy of 0.5 mm.

A control room was located behind the experimental room containing the CAVE setup. The experimenter could visually inspect the participant and the displays on the screens through a large window behind the participant. The four tasks were programmed and run using Python-based 3-D application software (Vizard, Floating Client 5.4; WorldViz LLC, Santa Barbara, CA).

Results and discussion

Table 1 presents summary statistics for the following collected norms: the H statistic, the percentage of participants producing the modal name, image agreement, familiarity, and visual complexity. An Excel file available in the online database presents average measures for each individual 3-D object, in addition to the length (nchar; i.e., the number of letters) and lexical frequency of the modal name (SUBTLEXWF) derived from the online SUBTLEX-NL database (Keuleers, Brysbaert, & New, 2010), its English translation, and the object's semantic category. Moreover, for the name agreement task, the numbers of DKO responses (1.33%), DKN responses (0.63%), and TOT responses (0.47%) are reported, as well as all nonmodal responses for each object. The average percentage of nonmodal responses obtained was 25.01%. Also, the number of times each object elicited the responses "no image" (0.34%) or "different object" (1.25%) in the image agreement task is provided online. On the basis of all these measures, researchers may select the 3-D objects that best fit the purpose of their study or application.

Table 1  Summary statistics for all elicited data

            H      %NA     IA     Fam    VC
  M         1.05   74.99   3.91   3.20   2.69
  SD        0.95   22.98   0.56   0.75   0.61
  Median    0.81   80.95   3.98   3.12   2.69
  Range     3.88   76.19   2.78   3.07   2.69
  Min       0      23.81   2.17   1.76   1.45
  Max       3.88   100     4.95   4.83   4.14
  Q1        0.23   59.52   3.51   2.64   2.21
  Q3        1.67   95.24   4.32   3.71   3.13
  IQR       1.44   35.71   0.82   1.07   0.92
  Skew      1.50   0.67    0.73   1.25   0.93

H, name agreement; %NA, percentage name agreement; IA, image agreement; Fam, familiarity; VC, visual complexity; Q1, 25th percentile; Q3, 75th percentile; IQR, interquartile range (Q3 – Q1); skew = (Q3 – Median)/(Median – Q1), >1 indicates a positive skew

Table 2 presents the results of a Pearson correlation analysis between the different collected norms. Similar to other standardized databases, a significant negative correlation was observed between the H statistic and the percentage of participants producing the modal name (see Brodeur et al., 2010, for an overview). This indicates that the 3-D objects that elicited a larger number of different unique names also elicited a lower percentage of participants producing the modal name. The correlations between image agreement and the two measures of name agreement indicate that the 3-D objects that elicited a larger number of different labels evoked mental images that were more different from each actual 3-D object. Furthermore, more familiar 3-D objects had larger overlap with participants' mental images, as indicated by the positive correlation between image agreement and familiarity. Finally, besides the commonly observed negative correlation between word length and word frequency, longer words were also rated as being visually more complex. This raises the interesting possibility that there may be an iconic relationship between the visual complexity of an object and the length of the verbal label it elicits (see Perniss & Vigliocco, 2014, for an overview of work on iconicity in human language).

Table 2  Correlation matrix for the collected norms

          H        %NA      IA      Fam     VC      nchar    WF
  H       1.000
  %NA     –.959**  1.000
  IA      –.453**  .391**   1.000
  Fam     –.001    –.013    .200*   1.000
  VC      –.054    .072     –.042   .056    1.000
  nchar   .115     –.133    .145    –.086   .180*   1.000
  WF      .064     –.022    –.007   –.024   –.015   –.183*   1.000

H, name agreement; %NA, percentage name agreement; IA, image agreement; Fam, familiarity; VC, visual complexity; nchar, number of characters (i.e., word length); WF, word frequency. ** p < .01 (two-tailed), * p < .05 (two-tailed)

Table 3 shows the mean values for name agreement, image agreement, familiarity, and visual complexity of the present 3-D object database and the corresponding mean values in three recently published databases that contain colored photographs (Adlington et al., 2009; Brodeur et al., 2010; Moreno-Martínez & Montoro, 2012). Furthermore, the corresponding mean values from the Snodgrass and Vanderwart (1980) black-and-white line drawing database are also given. The stimuli in the present 3-D object database and in the colored photograph databases on average elicited lower name agreement scores than did the line drawings by Snodgrass and Vanderwart. A lower overall name agreement facilitates the selection of stimuli that do not yield a ceiling effect in healthy participants in relatively simple tasks such as picture naming or object naming. This may be desirable when behavior in a healthy population is being compared to behavior in an impaired population (Adlington et al., 2009; Laws, 2005). Furthermore, comparison of the 25th (Q1) and 75th (Q3) percentile scores (Table 1) to the average scores for individual items in the online database will facilitate the selection of items from the extremes of the distribution.

Table 3  Overview of standardized measures in the present study, three recent colored photograph databases, and the canonical line drawings database by Snodgrass and Vanderwart (1980)

                                        N     H     %NA    IA    Fam   VC
  Present study                         147   1.05  74.99  3.91  3.20  2.69
  Adlington et al. (2009)               147   1.11  67.61  n.e.  3.76  2.89
  Brodeur et al. (2010)                 480   1.65  64     3.9   4     2.4
  Moreno-Martínez and Montoro (2012)    360   0.94  72     n.e.  3.56  2.55
  Snodgrass and Vanderwart (1980)       260   0.56  n.e.   3.69  3.29  2.96

Mean scores for name agreement, image agreement, familiarity, and visual complexity are provided for comparison. N, number of objects/images; H, name agreement; %NA, percentage name agreement; Fam, familiarity; VC, visual complexity; n.e., not evaluated

The familiarity measure in the present study yielded a result numerically similar to that from the line drawing database by Snodgrass and Vanderwart (1980), which is slightly lower than the average familiarity ratings from the three databases with colored photographs. This difference may be due to the fact that both line drawings and 3-D objects are designed from scratch by a designer, whereas photographs of objects, by definition, represent objects more directly. Nevertheless, photographs and line drawings are typically 2-D abstractions of an actual 3-D real-world object. They represent an object, but they are not the represented object itself. In the case of 3-D VR research, however, a participant's full immersion in a virtual world means that he or she should experience the 3-D objects as real objects. This difference also explains why certain semantic categories are not represented in the present database, though they are present in previous picture databases. Whereas traditional databases include, for instance, line drawings or (manipulated) photographs of individual body parts (Adlington et al., 2009; Duñabeitia et al., 2017; Moreno-Martínez & Montoro, 2012; Snodgrass & Vanderwart, 1980), no 3-D body parts are provided in the present database. Showing an individual body part in a 3-D virtual environment might decrease the participant's experience of presence in the virtual world, since people usually do not come across individual, detached body parts in everyday life.

Numerically, the average image agreement and visual complexity of the 3-D objects in the present study are comparable to the norms for photographs and line drawings from the four other databases. The overall numerical similarity in image agreement suggests that, across the evaluated databases, participants on average agreed to a similar extent with the collected modal names. The overall similarity in average visual complexity scores suggests that the depicted objects in the present database, despite their 3-D nature, were not evaluated as being visually more complex than the stimuli in earlier databases. Note, however, that this might change if 2-D photographs of objects were directly compared to our 3-D objects in the same study with the same group of participants.

A more direct comparison of the present database to earlier databases was performed by running correlational analyses across items that elicited the same modal names across pairs of databases. Table 4 presents the results of these separate Pearson correlational analyses testing for correlations in name agreement, image agreement, familiarity, and visual complexity between the norms in the present study and those obtained using three previous stimulus databases. Items were included in a correlational analysis when the literal English translation of a 3-D object's Dutch modal name corresponded to the English modal name in the database that was included in the analysis for comparison. The present database has 63 modal names in common with the photo database introduced by Brodeur et al. (2010). Moreover, it has 33 modal names in common with the color image database described in Moreno-Martínez and Montoro (2012). Fifty-two modal names from the present database were also elicited as modal names in the line-drawing database by Snodgrass and Vanderwart (1980). No correlational analyses were performed between the present database and the image database provided by Adlington et al. (2009), because only six modal names were the same across the two databases.

Table 4  Correlations (r) between ratings for 3-D objects from the present study, two recent colored photograph databases, and the canonical line drawings database by Snodgrass and Vanderwart (1980)

                                              Present study
                                        N     H       %NA     IA      Fam     VC
  Brodeur et al. (2010)                 63    .275*   .211    .469**  .516**  .569**
  Moreno-Martínez and Montoro (2012)    33    .160    –.069   n.e.    .670**  .584**
  Snodgrass and Vanderwart (1980)       52    .189    .066    .420**  .684**  .509**

N, number of items included in the analyses; H, name agreement; %NA, percentage name agreement; IA, image agreement; Fam, familiarity; VC, visual complexity. ** p < .01 (two-tailed), * p < .05 (two-tailed)

Overall, significant correlations between the present database and previous databases in terms of name agreement were either absent (Moreno-Martínez & Montoro, 2012; Snodgrass & Vanderwart, 1980) or weak (Brodeur et al., 2010). Thus, although a modal name may be the same across studies, this does not imply that the name agreement for that specific item was also similar. This is not surprising, because different stimuli and different languages (Dutch, English, and Spanish) were involved in the different studies. Weak to moderate significant positive correlations were observed between the present database and previous databases in terms of image agreement. This suggests that, overall, certain modal names (e.g., "hammer") elicit a highly stable mental image that is clearly represented by both picture stimuli and our 3-D object. Other modal names (e.g., "lamp") may consistently elicit lower image agreement across different studies because there is more variance in the mental images each elicits (e.g., different types of lamps) across the participants within studies. Moderate to strong significant positive correlations were observed between the present database and the three earlier databases for both familiarity and visual complexity in all three comparisons (see Table 4).

The familiarity result indicates that, broadly speaking, objects that were normed as more or less familiar in the present study were also more or less familiar to the participants who provided norms in the earlier picture databases. This can be explained by the fact that the participants providing norms for the different databases have all lived in Western cultures in which they may encounter similar objects in their daily life. Some cultural differences in the familiarity of specific objects may exist, for example, in different culture-specific types of food (e.g., the typical Dutch pastry tompouce that was included in the present database, or the crème caramel in Moreno-Martínez & Montoro, 2012). Such items were, however, by definition not included in these analyses, because they were present in only one of the databases.

The positive correlations in terms of visual complexity suggest that the objects depicted as visually more or less complex in the earlier databases were also designed and rated as being visually more or less complex in the present database. This overlap is explained by the inherent degree of visual complexity present in objects in everyday life, which is consequently represented as such in line drawings, pictures, and 3-D objects based on these real-world objects.

All in all, the comparisons of the present 3-D object database to four previous databases confirm the validity of the present set of 3-D objects. On the basis of these results, the present standardized 3-D object database sets the stage for better comparability of scientific findings that can result from the use of immersive VR and augmented reality settings within and across research labs and participant groups.

Conclusion

This study has introduced the first standardized set of 3-D objects for VR and augmented reality research and applications. The objects are freely available and can be selected as a function of the aim of a specific study or application, on the basis of the provided norms for name agreement, image agreement, familiarity, visual complexity, and the lexical characteristics of the object's modal name. The 3-D objects can be adapted in size, color, texture, and visual complexity to fit the purposes of individual studies and applications. Note, however, that the collected norms are representative only of the 3-D objects as they are currently presented in the online database. Modifying, for instance, an object's texture or color might change any of the collected norms. The 3-D objects can be used further for educational purposes as well as for testing patient populations in 3-D virtual environments. Researchers performing experiments in languages other than Dutch are invited to standardize the current set of 3-D objects for their local language and to expand the database by adding more objects. Sharing standardized 3-D objects across different labs will move VR research forward more quickly.

Acknowledgements  Open access funding provided by Max Planck Society.

Author note  I thank Jeroen Derks for designing the objects, Birgit Knudsen for assistance in running the experiments, Jeroen Geerts for creating the online repository, Albert Russel and Reiner Dirksmeyer for technical support, Peter Hagoort for making VR research possible at the Max Planck Institute, and two anonymous reviewers for valuable feedback.

Open Access  This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

Adlington, R. L., Laws, K. R., & Gale, T. M. (2009). The Hatfield Image Test (HIT): A new picture test and norms for experimental and clinical use. Journal of Clinical and Experimental Neuropsychology, 31, 731–753. doi:10.1080/13803390802488103

Alario, F.-X., & Ferrand, L. (1999). A set of 400 pictures standardized for French: Norms for name agreement, image agreement, familiarity, visual complexity, image variability, and age of acquisition. Behavior Research Methods, Instruments, & Computers, 31, 531–552. doi:10.3758/BF03200732

Barry, C., Morrison, C. M., & Ellis, A. W. (1997). Naming the Snodgrass and Vanderwart pictures: Effects of age of acquisition, frequency, and name agreement. Quarterly Journal of Experimental Psychology, 50A, 560–585. doi:10.1080/783663595

Bohil, C. J., Alicea, B., & Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nature Reviews Neuroscience, 12, 752–762.

Bonin, P., Peereman, R., Malardier, N., Méot, A., & Chalard, M. (2003). A new set of 299 pictures for psycholinguistic studies: French norms for name agreement, image agreement, conceptual familiarity, visual complexity, image variability, age of acquisition, and naming latencies. Behavior Research Methods, Instruments, & Computers, 35, 158–167. doi:10.3758/BF03195507

Brodeur, M. B., Dionne-Dostie, E., Montreuil, T., & Lepage, M. (2010). The Bank of Standardized Stimuli (BOSS), a new set of 480 normative photos of objects to be used as visual stimuli in cognitive research. PLoS ONE, 5, e10773. doi:10.1371/journal.pone.0010773

Cattell, J. M. (1885). Über die Zeit der Erkennung und Benennung von Schriftzeichen, Bildern und Farben. Philosophische Studien, 2, 635–

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. In M. C. Whitton (Ed.), Proceedings of the 20th annual conference on computer graphics and interactive techniques (pp. 135–142). New York, NY: ACM Press.

Dell'Acqua, R., Lotto, L., & Job, R. (2000). Naming times and standardized norms for the Italian PD/DPSS set of 266 pictures: Direct comparisons with American, English, French, and Spanish published databases. Behavior Research Methods, Instruments, & Computers, 32, 588–615. doi:10.3758/BF03200832

Dimitropoulou, M., Duñabeitia, J. A., Blitsas, P., & Carreiras, M. (2009). A standardized set of 260 pictures for Modern Greek: Norms for name agreement, age of acquisition, and visual complexity. Behavior Research Methods, 41, 584–589. doi:10.3758/BRM.41.2.584

Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2017). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology. doi:10.1080/17470218.2017.1310261

Eichert, N., Peeters, D., & Hagoort, P. (2017). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods (in press).

Fox, J., Arena, D., & Bailenson, J. N. (2009). Virtual reality: A survival guide for the social scientist. Journal of Media Psychology, 21, 95–113.

Keuleers, E., Brysbaert, M., & New, B. (2010). SUBTLEX-NL: A new measure for Dutch word frequency based on film subtitles. Behavior Research Methods, 42, 643–650. doi:10.3758/BRM.42.3.643

Laws, K. R. (2005). "Illusions of normality": A methodological critique of category-specific naming. Cortex, 41, 842–851.

Laws, K. R., & Hunter, M. Z. (2006). The impact of colour, spatial resolution, and presentation speed on category naming. Brain and Cognition, 62, 89–97.

Levelt, W. J. M. (2013). A history of psycholinguistics. Oxford, UK: Oxford University Press.

Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1–38; disc. 38–75.

Martein, R. (1995). Norms for name and concept agreement, familiarity, visual complexity and image agreement on a set of 216 pictures. Psychologica Belgica, 35, 205–225.

Migo, E. M., Montaldi, D., & Mayes, A. R. (2013). A visual object stimulus database with standardized similarity information. Behavior Research Methods, 45, 344–354.

Moreno-Martínez, F. J., & Montoro, P. R. (2012). An ecological alternative to Snodgrass & Vanderwart: 360 high quality colour images with norms for seven psycholinguistic variables. PLoS ONE, 7, e37527.

Nishimoto, T., Miyawaki, K., Ueda, T., Une, Y., & Takahashi, M. (2005). Japanese normative set of 359 pictures. Behavior Research Methods, 37, 398–416. doi:10.3758/BF03192709

Nisi, M., Longoni, A. M., & Snodgrass, J. G. (2000). Italian measurement on the relation of name, familiarity, and acquisition age for the 260

Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174–215.

Szekely, A., Jacobsen, T., D'Amico, S., Devescovi, A., Andonova, E., Herron, D., … Bates, E. (2004). A new on-line resource for psycholinguistic studies.
Journal of Memory and Language, 51, 247–250. figures of Snodgrass and Vanderwart. Giornale Italiano di doi:10.1016/j.jml.2004.03.002 Psicologia, 27, 205–218. Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2017). The combined Peeters, D., & Dijkstra, T. (2017). Sustained inhibition of the native lan- use of virtual reality and EEG to study language processing in nat- guage in bilingual language production: A virtual reality approach. uralistic environments. Behavior Research Methods.doi:10.3758/ Bilingualism: Language and Cognition (in press). s13428-017-0911-9 Perniss, P., & Vigliocco, G. (2014). The bridge of iconicity: From a world Van Schagen, I., Tamsma, N., Bruggemann, F., Jackson, J. L., & Michon, of experience to the experience of language. Philosophical J. A. (1983). Namen en normen voor plaatjes. Nederlands Tijdschrift Transactions of the Royal Society B, 369, 20130300. voor de Psychologie, 38, 236–241. Price, C. J., & Humphreys, G. W. (1989). The effects of surface detail on Viggiano, M. P., Vannucci, M., & Righi, S. (2004). A new standardized object categorization and naming. Quarterly Journal of set of ecological pictures for experimental and clinical research on Experimental Psychology, 41, 797–828. visual object processing. Cortex, 40, 491–509. doi:10.1016/S0010- Rossion, B., & Pourtois, G. (2004). Revisiting Snodgrass and 9452(08)70142-4 Vanderwart’s object pictorial set: The role of surface detail in Vitkovitch, M., & Tyrrell, L. (1995). Sources of disagreement in object basic-level object recognition. Perception, 33, 217–236. doi:10. naming. Quarterly Journal of Experimental Psychology, 48, 822– 1068/p5117 848. doi:10.1080/14640749508401419 Sanfeliu, M. C., & Fernandez, A. (1996). A set of 254 Snodgrass– Wang, M. Y. (1997). The evaluation of perceptual and semantic charac- Vanderwart pictures standardized for Spanish: Norms for name teristics for a set of object contour pictures. 
Chinese Journal of agreement, image agreement, familiarity, and visual complexity. Psychology, 39, 157–172. Behavior Research Methods, Instruments, & Computers, 28, 537– Wurm, L. H., Legge, G. E., Isenberg, L. M., & Luebker, A. (1993). Color 555. doi:10.3758/BF03200541 improves object recognition in normal and low vision. Journal of Slater, M. (2014). Grand challenges in virtual environments. Frontiers in Experimental Psychology: Human Perception and Performance, Robotics and AI, 1, 3. 19, 899–911. doi:10.1037/0096-15188.8.131.529