Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss

Abstract

Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

Studying optimal methods to support communication development in children with hearing loss has a venerable history and includes debates regarding the role of auditory and visual input (e.g., Ewing, Ewing, & Cockersole, 1938). Educational approaches for children with hearing loss vary in their emphasis on unisensory (auditory-only) versus multisensory (simultaneous auditory and visual) input because of differences in underlying multisensory processing theories and lack of evidence regarding the relative value of either the unisensory or the multisensory approach over the other. Supporters of each position strive to increase the child’s ability to integrate multisensory input for “real world” communication (Pollack, 1970). The difference is “wholly in the means by which this goal is to be reached” (Pollack, 1970, p. 42). Because there is limited empirical evidence as to which approach is more effective for which children with hearing loss and under what conditions, some children may not be reaching optimal spoken language outcomes. This study directly compares the effectiveness and efficiency of a word learning intervention under an auditory-only (AO; unisensory) intervention condition that prohibits access to speechreading cues versus an audiovisual (AV; multisensory) intervention condition that permits access to speechreading cues. Specifically, this investigation was designed to inform intervention practices with the goal of maximizing spoken language outcomes in children with hearing loss. Neither condition provides access to gestures, signs, pictures, or other forms of visual input other than visual speechreading cues in the AV intervention condition. This introduction describes the evidence supporting the two diverging positions (i.e., unisensory and multisensory) on how best to teach children with hearing loss to use auditory and visual information to facilitate spoken language skills. Then, the need for direct evidence of learning differences under AO and AV intervention conditions for children with hearing loss is presented. Finally, vocabulary deficits and the effectiveness of vocabulary instruction in children with hearing loss are discussed.
Unisensory Theory Supporters of a unisensory perspective suggest that providing visual access to the speaker’s lips, tongue, and throat alongside auditory information competes with and inhibits a child’s processing of auditory information, which is then reflected in reduced spoken word acquisition. Pollack (1970) summarizes the unisensory position well by stating, “There can be no compromise, because once emphasis is placed upon ‘looking’ there will be divided attention, and the unimpaired modality, vision, will be victorious” (p. 18). This position has been used to support long-standing educational practices that aim to emphasize and highlight the auditory signal by limiting or eliminating access to visual cues, such as using a hand cue (i.e., covering the mouth with a flat, slanted hand; Estabrooks, 2001), speech hoops, speaking when the child is looking away, visual distractions, and positioning (Robbins, 2016). Evidence in favor of unisensory training is drawn from studies of AO training (e.g., Doehring & Ling, 1971; Ling, 1976) and of sensory deprivation and neural plasticity (e.g., de Villers-Sidani, Chang, Bao, & Merzenich, 2007; Sharma, Campbell, & Cardon, 2015). Interest in unisensory training to isolate the auditory sense to facilitate habilitation is centuries old (Pollack, 1970). Much of the evidence for unisensory training can be traced back to the 1940s to 1970s when use of AO training emerged in the United States (Pollack, 1970). Studies reported increased word perception and improved recall of sequences of verbal information through training using audition without access to speechreading (i.e., visual) cues (e.g., Doehring & Ling, 1971; Ling, 1976). However, these studies showed limited generalization. Further, because the studies used no-treatment comparison groups instead of a comparison multisensory intervention group, they did not directly address whether multisensory input during training is advisable (Doehring & Ling, 1971; Ling, 1976). More recent resources supporting unisensory training for children with hearing loss (e.g., Estabrooks, 2001; Rhoades, Estabrooks, Lim, & MacIver-Lux, 2016; Robbins, 2016) still rely on early studies as opposed to more recent empirical evidence (see McDaniel & Camarata, 2017 for a review). Studies of sensory deprivation and neural plasticity have been used to support AO training as well. One underlying motivation of AO training is to reduce the probability that brain regions that process auditory information will be repurposed to process another sense—namely vision (e.g., Sharma et al., 2015; Zupan & Sussman, 2009) or somatosensation (i.e., sense of touch; Glick & Sharma, 2017; Sharma & Glick, 2016; Sharma et al., 2015). Evidence from animal models indicates that after a period of total auditory deprivation, the auditory cortex does indeed undergo changes that result in relatively inefficient processing of auditory information even after auditory access is subsequently provided (de Villers-Sidani et al., 2007; Kral, Tillein, Heid, Hartman, & Klinke, 2005; Kral, Tillein, Heid, Klinke, & Hartmann, 2006; Zhang, Bao, & Merzenich, 2001). When these principles are applied to children with hearing loss, the unisensory theory predicts that AO training prevents cortical reorganization in favor of vision and instead supports processing of auditory information (Noreña & Eggermont, 2005; Polley, Steinberg, & Merzenich, 2006; Sharma, Gilley, Dorman, & Baldwin, 2007). 
The unisensory position asserts that providing access to visual input, particularly speechreading cues, would inhibit the ability of a child with hearing loss to process AO input, including spoken words, due to an overreliance on the visual system (Pollack, 1970). Multisensory Theory Supporters of a multisensory theory suggest that simultaneous, integrated AV input facilitates word learning, even when input is unequal across senses (e.g., in hearing loss). Evidence in favor of the multisensory position is drawn from studies of the multisensory nature of speech perception in natural contexts (e.g., Holt, Kirk, & Hay-McCutcheon, 2011), spoken word recognition tasks in unisensory versus multisensory conditions (Erber, 1969, 1972; Gilley, Sharma, Mitchell, & Dorman, 2010; Sumby & Pollack, 1954), and associations between the degree of benefit from visual cues with language skills and speech intelligibility in children with hearing loss (Bergeson, Pisoni, & Davis, 2003; Kirk, Pisoni, & Lachs, 2002). First, visual cues more heavily influence spoken word recognition for individuals with typical hearing than previously thought (Holt et al., 2011). Visual input can change auditory perception as demonstrated through the McGurk effect (McGurk & MacDonald, 1976), and auditory input can influence visual perception (Seitz, Kim, & Shams, 2006). Although multisensory functions were once thought to be restricted primarily to higher level brain regions and dedicated multisensory subcortical regions feeding these higher level brain regions (e.g., cortical association areas, premotor cortices, and sensorimotor subcortical regions), more recent evidence shows multisensory functions at early processing levels including the primary visual and auditory cortices (e.g., Ghazanfar & Schroeder, 2006; Murray, Lewkowicz, Amedi, & Wallace, 2016; Murray, Thelen, et al., 2016). Additionally, recent evidence indicates that multisensory processing emerges very early in development. Infants as young as two months show detection of mismatched auditory and visual stimuli for vowels (Patterson & Werker, 2003). Infants also exhibit perception of audiovisual synchrony (Lewkowicz, 2010). Second, replicated findings indicate a benefit of AV input relative to AO and visual-only (VO) input for speech recognition tasks for children and adults with and without hearing loss, especially under noisy conditions (e.g., Bergeson & Pisoni, 2004; Bergeson, Pisoni, & Davis, 2005; Campbell, 2008; Erber, 1969, 1972, 1979; Geers, Brenner, & Davidson, 2003; Gilley et al., 2010; Holt et al., 2011; Kirk et al., 2007; Lachs, Pisoni, & Kirk, 2001; Ross, Saint-Amour, Leavitt, Javitt, & Foxe, 2007; Stevenson, Sheffield, Butera, Gifford, & Wallace, 2017; Sumby & Pollack, 1954). Investigators report superadditive effects (i.e., greater performance with multisensory stimuli than the summed performance with unisensory stimuli; Stanford & Stein, 2007) in word recognition accuracy in AV conditions compared with AO and VO conditions (e.g., Gilley et al., 2010). Nonetheless, individuals with hearing loss vary in the degree to which they fuse and benefit from AV stimuli relative to unisensory input. Prominent factors include audiological history and age of cochlear implantation—two measures of auditory experience (Bergeson & Pisoni, 2004; Bergeson et al., 2005; Gilley et al., 2010; Rouger, Fraysse, Deguine, & Barone, 2008; Schorr, Fox, van Wassenhove, & Knudsen, 2005).
In alignment with these findings, numerous animal models show the importance of experience for developing multisensory integration abilities (e.g., Wallace & Stein, 2007; Wallace, Perrault, Hairston, & Stein, 2004; Yu, Rowland, & Stein, 2010). For example, cats required exposure to coordinated, not random, auditory and visual input to exhibit multisensory enhancement to AV stimuli versus unisensory stimuli (Xu, Yu, Rowland, Stanford, & Stein, 2012). Finally, AV integration and enhancement have been positively correlated with speech-language outcomes including receptive vocabulary, receptive and expressive language, and speech intelligibility in children with hearing loss (Bergeson et al., 2003, 2005; Kirk et al., 2002). These correlations suggest there might be shared variance between multisensory integration and gains in speech and language skills in children with hearing loss. Further research is needed to understand and apply this correlational evidence. In sum, given the AV multisensory nature of speech perception in natural contexts, the importance of experience for multisensory integration, benefits of multisensory input for speech recognition tasks, and positive correlations between AV integration and enhancement with speech-language outcomes, one must carefully consider the possible implications of unisensory versus multisensory intervention strategies for children with hearing loss. The degree and manner in which children with hearing loss receive access to multisensory input could have important implications for speech and language outcomes. Vocabulary Deficits and Instruction Effectiveness in Children with Hearing Loss Because vocabulary skills strongly predict a variety of long-term outcomes, including reading and other academic skills, the importance of word learning and vocabulary skills cannot be overemphasized (Johnson & Goswami, 2010; Qi & Mitchell, 2012). Many children with hearing loss understand and use fewer words than their peers with typical hearing, despite substantial gains in overall spoken language outcomes (Convertino, Borgna, Marschark, & Durkin, 2014; Davidson, Geers, & Nicholas, 2014; Houston, Stewart, Moberly, Hollich, & Miyamoto, 2012; Lund, 2016; Tomblin et al., 2015). They face an uphill battle learning to understand and use age expected vocabulary words due to a history of reduced auditory input, an impoverished auditory signal, and word learning deficits relative to peers with typical hearing (Anderson, 2015; Houston, Carter, Pisoni, Kirk, & Ying, 2005; Houston, Stewart, et al., 2012; Lederberg & Spencer, 2009; Lederberg, Prezbindowski, & Spencer, 2000). Children with hearing loss exhibit variation in word learning skills and use of word learning strategies, which may be attributed to a variety of factors including vocabulary size, age of implantation, and chronological age (Houston, Stewart, et al., 2012; Lederberg & Spencer, 2001, 2009; Lederberg et al., 2000; Robertson, von Hapsburg, & Hay, 2017). Continued investigation of the specific factors influencing word learning skills in children with hearing loss is needed to inform the development and implementation of appropriate vocabulary interventions for children with hearing loss who as a group exhibit vocabulary skills below age expectations (e.g., Lund, 2016). Additional investigation as to which specific features of instruction are most effective for increasing the vocabulary skills of children with hearing loss is also required. 
In a systematic review of vocabulary instruction for children with hearing loss aged 3 to 21 years, Luckner and Cooke (2010) identified 10 studies published between 1967 and 2008 that directly assessed the effects of a specific vocabulary intervention. Luckner and Cooke (2010) called for additional research on vocabulary instruction for children with hearing loss and highlighted the need for replication of published findings. Since 2008, several more studies have been published (e.g., Lund & Douglas, 2016; Lund & Schuele, 2014; Lund, Douglas, & Schuele, 2015; Sacks et al., 2013), but the overall number of investigations remains small. Nonetheless, one theme that has emerged through several empirical articles and reviews is the benefit of explicit (i.e., direct) instruction (e.g., Lund & Douglas, 2016), which is consistent with findings from children with typical hearing (Marulis & Neuman, 2010). Need for Direct Evidence Despite decades of debate between unisensory and multisensory theoretical positions, a literature search did not yield any studies of children with hearing loss directly comparing AO versus AV intervention conditions for word learning or other language tasks. Related findings from adults with cochlear implants or cochlear implant simulations via vocoders may not apply to children with hearing loss because of differences in auditory and language experiences (Bernstein, Eberhardt, & Auer, 2014; Pilling & Thomas, 2011; Stacey et al., 2010). Further, nearly all known experiments focus on word recognition rather than word learning (Fu & Galvin, 2007; Stevenson et al., 2017). One study of word learning in 6- to 10-year-old children with hearing loss used an intervention that provided simultaneous visual and auditory input through a computer-animated tutor; however, this study did not have a comparison condition without access to visual speechreading cues (Massaro & Light, 2004). Researchers and clinicians need direct evidence of how providing visual (speechreading) stimuli may impact the effectiveness and efficiency of word learning interventions for children with hearing loss. Although studies comparing oral communication versus total communication programs and sign language exposure (e.g., Bergeson & Pisoni, 2004; Bergeson et al., 2005; Geers et al., 2017; Geers, 2002; Houston, Beer, et al., 2012; Pisoni et al., 1999) are indirectly related to the current question of AO versus AV intervention conditions for word learning, these studies do not compare spoken AO and AV instruction directly. In addition, they do not report the degree to which oral communication programs, caregivers, and interventionists emphasize unisensory training, and they only broadly describe the degree of exposure to sign language in home and educational settings. Perhaps even more importantly, whereas auditory and visual stimuli from spoken words offer complementary and correlated information for the same articulatory gestures (Campbell, 2008), auditory and visual stimuli from spoken words with sign language do not (Houston, Beer, et al., 2012). Thus, these two comparisons are not equivalent. Last, these correlational studies lack sufficient evidence to answer causal questions about interventions’ active ingredients. Whether AO or AV input results in more efficient word learning for children with hearing loss cannot be discerned from the extant evidence base. Therefore, an adapted alternating treatments single case intervention research design was employed in this investigation.
This design allowed for the direct, stringent evaluation of the effectiveness and efficiency of AO versus AV intervention conditions for word learning tasks. Research Questions and Hypotheses This study directly compares the effectiveness and efficiency of an explicit receptive word learning intervention for teaching young children with hearing loss to associate pseudowords (i.e., non-real or “nonsense” words such as “weem” and “moob” that conform to the rules of English phonology) with unfamiliar objects in an AO intervention condition versus an AV intervention condition. The following two research questions were used to guide the present investigation: (A) Do children with hearing loss learn pseudowords receptively under AO and under AV intervention conditions? (B) Do children with hearing loss learn pseudowords receptively more efficiently under an AO intervention condition compared with an AV intervention condition when tested in an AO presentation format? When directly comparing the relative effectiveness and efficiency of AO versus AV intervention conditions on word learning, one must use the same testing method regardless of the words’ intervention format. Otherwise, differences in results could be attributable to different testing formats. Providing one consistent testing method inherently creates a mismatch between testing versus intervention formats of presenting the word. Arguably, an AO testing format for words taught in AO as well as AV intervention conditions (a) eliminates the possibility of participants being exposed to visual input for the AO words during testing, (b) provides an appropriately conservative test of AV input benefits, and (c) directly addresses whether AV input inhibits word learning when tested in an AO format. Based on a multisensory theory, the AV intervention condition should result in more efficient word learning. In contrast, based on a unisensory theory, the AO intervention condition should result in more efficient word learning. In addition, unisensory theory posits that the AV presentation format will inhibit the ability of a child with hearing loss to comprehend words when presented without speechreading (visual) cues. This study design allows for a direct test of these competing hypotheses. Methods Experimental Design This study employed a type of single case research design (SCRD) called an adapted alternating treatments design (AATD) to evaluate the effectiveness and compare the efficiency of AO and AV intervention conditions (Sindelar, Rosenberg, & Wilson, 1985; Wolery, Gast, & Ledford, 2014). SCRD is a type of experimental design that enables investigators to draw causal conclusions about the relation between independent and dependent variables (DVs) (Horner et al., 2005). It is ideal for low-incidence populations with notable variability across individuals, such as children with hearing loss (Horner et al., 2005; Wendel, Cawthon, Ge, & Beretvas, 2015), and has been recently highlighted as directly applicable to studying intervention practices in children with hearing loss (Cannon, Guardino, Antia, & Luckner, 2016). There are a number of designs within the overall rubric of SCRD that can be adapted to address different intervention questions. Selecting an appropriate SCRD for a specific research question is a critical decision for all SCRD studies. 
The AATD was selected for this study because it was specifically developed to compare two or more treatments for behaviors that are expected to remain after intervention is withdrawn (i.e., non-reversible behaviors). Word learning is considered a non-reversible behavior because children are expected to retain word knowledge after they acquire it, even if they are no longer being directly taught the information. The AATD orders conditions through rapid iterative alternation and necessitates behavioral sets (in this case word sets) of equal difficulty to draw conclusions regarding the relative efficiency of one condition compared with another. Investigators can also draw conclusions about the effectiveness of a condition by measuring performance on an untrained control set throughout the AATD study. Conclusions are drawn based on whether the participant exhibits a different pattern of acquisition for the target behavior under one condition than is observed without intervention for an equivalent set (i.e., for a control set). Specific to this study, we examined whether participants exhibited greater accuracy identifying the target pseudowords in the AO and AV sets than in the control set. Participants Participants were recruited from a specialized preschool for children with hearing loss that focuses on developing listening and speaking skills. The preschool’s teachers do not use manual communication (e.g., American Sign Language). As required by the inclusion criteria, participants achieved standard scores of at least 70 for spoken receptive vocabulary skills on the Peabody Picture Vocabulary Test, Fourth Edition (PPVT-4; Dunn & Dunn, 2007) and spoken expressive vocabulary skills on the Expressive Vocabulary Test, Second Edition (EVT-2; Williams, 2007). Participants were required to be monolingual English speakers based on caregiver report to ensure that English vocabulary deficits were not due to limited English exposure and to avoid multiple phonological systems confounding the word sets’ equal difficulty. Individuals were included in the study only if they displayed at least average nonverbal cognitive ability (i.e., a standard score of at least 85 on the Kaufman Brief Intelligence Test, Second Edition, Kaufman & Kaufman, 2004, or the Primary Test of Nonverbal Intelligence, Ehrler & McGhee, 2008), had a negative history of uncorrected vision impairment per caregiver report and/or medical record, and had no evidence of severe motor impairment. Individuals with low nonverbal cognitive levels, uncorrected vision impairment, and/or severe motor impairment may not have been able to participate in the research tasks. Teachers identified students expected to meet inclusion criteria. Study packets containing a letter describing the study and consent forms were sent to prospective participants. The recruitment process continued until four participants who met the inclusion criteria were secured. Participation in the study was also dependent on the family’s ability to commit to the intervention schedule (i.e., three times per week for several months). Institutional Review Board approval to conduct the study was granted and caregivers provided written consent for participants. Four preschool children (three males, one female) aged 4 years 4 months to 5 years 2 months with permanent hearing loss completed the study (see Table 1). Three participants were reported to be white and one to be Asian. None identified as Hispanic.
For maternal education level, two reported a bachelor’s degree and two reported some college. According to the participants’ medical records, possible hearing loss was detected at birth for each participant and then confirmed via follow-up testing. All of the participants consistently wore bilateral hearing technology (i.e., at least 8 hours per day per caregiver report; Contrera et al., 2014). All children at the preschool complete listening checks every morning with the educational staff and a resident educational audiologist provides troubleshooting services and backup equipment as needed. The participants varied in their degree of hearing loss and types of hearing technology devices. Participants 1, 3, and 4 were all fit with hearing technology by 3 months of age. Participants 1 and 4 received bilateral cochlear implants at 12 months of age.

Table 1. Participant characteristics

Participant | Age | Gender | Degree of hearing loss | Device(s) | Age at HA | Age at CI | PPVT SS | EVT SS | NVIQ
1 | 4;7 | Male | Severe to profound | CIs | 3 months | 12 months | 91 | 107 | 97
2 | 5;2 | Female | Moderate to severe^a | BCHAs | 2.5 years | NA | 101 | 112 | 113
3 | 4;4 | Male | Mild to moderately-severe | HAs | 3 months | NA | 109 | 100 | 113
4 | 4;11 | Male | Severe to profound | CIs | 2 months | 12 months | 89 | 101 | 107

Note. Age is the participant’s chronological age at the first day of the pre-baseline evaluation (years;months); Age at CI is the participant’s chronological age when his cochlear implants were activated; BCHA = bone conduction hearing aid; CI = cochlear implant; EVT = Expressive Vocabulary Test—Second Edition (Williams, 2007); HA = hearing aid; NA = not applicable; NVIQ = nonverbal intelligence quotient as measured by the Kaufman Brief Intelligence Test, Second Edition (Kaufman & Kaufman, 2004) or Primary Test of Nonverbal Intelligence (Ehrler & McGhee, 2008); PPVT = Peabody Picture Vocabulary Test—Fourth Edition (Dunn & Dunn, 2007); SS = standard score.
^a Participant 2 presented with a severe mixed hearing loss rising to normal hearing sensitivity at 4,000 Hz and a moderate hearing loss from 6,000–8,000 Hz in the left ear and moderately-severe to moderate conductive hearing loss in the right ear.
Age is the participant’s chronological age at the first day of the pre-baseline evaluation (years;months); Age at CI is the participant’s chronological age when his cochlear implants were activated; BCHA = bone conduction hearing aid; CI = cochlear implant; EVT = Expressive Vocabulary Test—Second Edition (Williams, 2007); HA = hearing aid; NA = not applicable; NVIQ = nonverbal intelligence quotient as measured by the Kaufman Brief Intelligence Test, Second Edition (Kaufman & Kaufman, 2004) or Primary Test of Nonverbal Intelligence (Ehrler & McGhee, 2008); PPVT = Peabody Picture Vocabulary Test—Fourth Edition (Dunn & Dunn, 2007); SS = standard score. aParticipant 2 presented with a severe mixed hearing loss rising to normal hearing sensitivity at 4,000 Hz and a moderate hearing loss from 6,000–8,000 Hz in the left ear and moderately-severe to moderate conductive hearing loss in the right ear. Participant 2, who was diagnosed with mixed bilateral hearing loss, bilateral microtia, right-side atresia, left-side cochlear aplasia, and left-side auditory neuropathy and was adopted internationally, received amplification at 2.5 years of age. Both social and medical histories contributed to her age of amplification. Limited information is available regarding audiological and medical history prior to her adoption. The most recent audiological evaluation report in Participant 2’s medical record indicated a severe mixed hearing loss rising to normal hearing sensitivity at 4,000 Hz and a moderate hearing loss from 6,000 to 8,000 Hz in the left ear and a moderately-severe to moderate conductive hearing loss in the right ear. Aside from hearing loss, none of the participants were reported to have other disabilities or medical conditions that were expected to impact learning or adaptive skills. Rationale for Instructional Approach and Teaching Pseudowords Due to significantly higher gains reported for studies implementing only explicit rather than only implicit (i.e., incidental) vocabulary interventions, we implemented an explicit word learning intervention (Lund & Douglas, 2016; Marulis & Neuman, 2010). Relatedly, we taught nouns because there is evidence that children with and without language impairment learn nouns more efficiently than action verbs receptively in experimental contexts (Camarata & Leonard, 1985; Leonard et al., 1982; Rice, Buhr, & Nemeth, 1990; Schwartz & Leonard, 1984). We chose to use pseudowords to control the number of exposures participants received. Because we created the pseudowords specifically for this study, it is highly unlikely that the participants would have heard these pseudowords before the study or during the study from anyone other than the examiner. In addition, use of pseudowords enabled superior confidence that word sets were equivalent and independent, which were necessary for the research design (i.e., AATD). Word Set Development Because the AATD necessitates word sets of equal difficulty, three independent pseudoword sets of equal difficulty that were counterbalanced across AV, AO, and control pseudoword sets across participants were carefully developed. These pseudoword sets were developed to be taught within the intervention and to be assessed during the study probes as the outcome measure. Each set included four consonant-vowel-consonant (CVC) pseudowords that are phonotactically legal in English phonology. 
Word sets were balanced for phoneme audibility, phoneme visibility, and phonological neighborhood density (see Appendix A for complete sets; Stoel-Gammon, 2011; Storkel & Hoover, 2010). Phoneme visibility was categorized by place of articulation and degree of lip movement (Bernthal, Bankson, & Flipsen, 2009). Phoneme audibility was categorized by relative amplitude and acoustic frequency. Relatively low amplitude, higher frequency sounds were classified as low audibility (Ohde & Sharf, 1992). Because vowels demonstrate relatively high audibility, they were distributed by visibility. As shown in Appendix A, each word set contains one high visibility–high audibility, one low visibility–high audibility, one high visibility–low audibility, and one low visibility–low audibility word. The first author randomly assigned unfamiliar object referents to each pseudoword for each participant and confirmed that the participants did not have a “name” for any of the objects prior to intervention. Objects that a participant named during a screening probe were not used for that participant. See Appendix B for example objects. Procedures The first author was the examiner for assessments and intervention. She is a certified, licensed speech-language pathologist who was unfamiliar to the participants prior to the study. The examiner recorded the room’s noise level using the SPLnFTT Noise Meter app on an iPhone 4S before each session. This system was calibrated with a dosimeter. The mean sound levels for the two rooms used across all sessions were 42.7 dB(A) and 40.0 dB(A), which provides evidence that the sessions were completed in quiet rooms with limited background noise. All sessions were video-recorded using a Sony Professional camcorder for data collection purposes and to monitor procedural fidelity. The examiner used a speech hoop (10.5” diameter) to prohibit access to speechreading cues without distorting the acoustic signal during listening checks, the DV probes, and the AO intervention condition. Pre-baseline evaluation Participants completed assessments of language, speech production, auditory discrimination, and audiovisual integration for descriptive purposes during the pre-baseline evaluation (i.e., preliminary testing). This descriptive information provides guidance on to whom the study results may apply (i.e., external validity) by enabling readers to compare individual children with the participants of the current study (see Table 2 for results). The pre-baseline evaluation included two measures of audiovisual integration—multimodal word recognition and McGurk tasks. In the multimodal word recognition task, participants repeated 24 recorded tri-phonemic, monosyllabic nouns from the Hoosier Audiovisual Multi-talker Database with a 0 dB signal-to-noise ratio in AV, AO, and VO conditions to yield an AV gain score (Sheffert, Lachs, & Hernandez, 1996; Stevenson et al., 2015). A total of five lists, each containing 24 real words, were used across the participants. Word lists were used across multiple conditions and participants. See Appendix C for an example word list. Stimuli were presented using MATLAB 2012b software. All participants completed the multimodal word recognition task in all conditions. As shown in Table 2, participants exhibited a wide range of benefit from AV input relative to unisensory input in the word recognition task (range: 8–50%; see Table 2).
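For reference, the AV Gain values reported in Table 2 below follow the calculation described in the table note (based on Stevenson et al., 2015). The following is a minimal, illustrative sketch of that calculation; the function name and the example accuracies are hypothetical and are not drawn from the participants’ data.

```python
def av_gain(av_acc, ao_acc, vo_acc):
    """Gain in audiovisual (AV) word recognition accuracy relative to the
    accuracy predicted from unisensory performance, following the formula
    in the Table 2 note: AV - [(AO + VO) - (AO x VO)].

    All accuracies are proportions between 0 and 1.
    """
    predicted_av = (ao_acc + vo_acc) - (ao_acc * vo_acc)
    return av_acc - predicted_av


# Illustrative (hypothetical) accuracies: 60% correct auditory-only,
# 20% correct visual-only, and 80% correct audiovisual -> 12% AV gain.
print(round(av_gain(av_acc=0.80, ao_acc=0.60, vo_acc=0.20), 2))  # 0.12
```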
Table 2. Descriptive information for participants

Participant | TACL SS (language) | TEXL SS (language) | TAPS-3 WD (auditory discrimination) | Arizona-3 SS (speech production) | AV Gain (multisensory processing) | McGurk (multisensory processing)
1 | 92 | 94 | 10 | 96 | 32% | .04
2 | 88 | 84 | 7 | 90 | 8% | .00
3 | 108 | 98 | 10 | 90 | 21% | DNC
4 | 98 | 86 | 6 | 93 | 50% | DNC

Note. Arizona-3 = Arizona Articulation Proficiency Scale—Third Revision (Fudala, 2000); AV Gain = gain in percent accuracy identifying words in the audiovisual (AV) condition relative to the child’s predicted AV percent accuracy based on his or her unisensory performance calculated as [AV accuracy] – [(auditory-only accuracy + visual-only accuracy) – (auditory-only accuracy × visual-only accuracy)] based on Stevenson et al. (2015); BCHA = bone conduction hearing aid; CI = cochlear implant; DNC = did not complete; HA = hearing aid; McGurk = percentage of perceived McGurk illusions relative to unisensory performance (Stevenson, Zemtsov, & Wallace, 2012); SS = standard score; TACL = Test of Auditory Comprehension of Language (Carrow-Woolfolk & Allen, 2014a); TAPS-3 WD = Test of Auditory Processing Skills—Third Edition Word Discrimination subtest (scaled score; Martin & Brownell, 2005); TEXL = Test of Expressive Language (Carrow-Woolfolk & Allen, 2014b).

For the McGurk task, two participants demonstrated sufficiently accurate and reliable responses during the practice trials and unisensory conditions (i.e., AO and VO) to complete the task. For the multisensory trials, participants were shown dubbed videos of a woman saying, “Ba,” or “Ga,” with congruent or incongruent AV stimuli (McGurk & MacDonald, 1976). The McGurk incongruent (illusion) trials presented a video of a woman saying, “Ga,” with an audio recording of “Ba.” Across all conditions, participants reported the syllable they perceived by pressing a computer key (e.g., “B” for “Ba”; Baum, Stevenson, & Wallace, 2015).
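The McGurk index in Table 2 summarizes these trial-level responses. The sketch below is a minimal, illustrative implementation of that summary, following the formula reported next (Stevenson, Zemtsov, & Wallace, 2012); the response counts shown are hypothetical and do not correspond to any participant.

```python
def mcgurk_strength(p_av_illusion, p_unisensory_da):
    """Strength of the McGurk effect: the proportion of incongruent AV trials
    perceived as the illusory syllable, weighted by how rarely the fused
    percept (/da/) was reported on unisensory trials:
    p(AV McGurk) x [1 - p(unisensory /da/)].
    """
    return p_av_illusion * (1 - p_unisensory_da)


# Hypothetical example: 3 illusion responses on 20 incongruent AV trials and
# no /da/ responses on unisensory trials -> strength of 0.15 (scale 0 to 1).
print(mcgurk_strength(p_av_illusion=3 / 20, p_unisensory_da=0.0))  # 0.15
```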
The strength of the McGurk effect was calculated as the percentage of perceived illusions relative to unisensory baseline performance, p(AV McGurk) × [1 − p(unisensory /dɑ/)] (Stevenson, Zemtsov, & Wallace, 2012). Because no participants reported perceiving “da” in the unisensory condition, the results are equivalent to those that do not take into account unisensory performance. Participant 1 pressed “D” on one of the 20 multisensory incongruent trials, resulting in a score of .04 (scale 0 to 1), but he spontaneously and immediately stated that he made a mistake. Participant 2 did not report perceiving the McGurk illusion. Because age is a known factor in the likelihood of individuals perceiving the McGurk illusion, age may at least partially explain the participants’ performance (Hillock, 2010; Massaro, 1984; Massaro, Thompson, Barron, & Laren, 1986; McGurk & MacDonald, 1976). Older children are more likely to perceive the illusion than younger children (Hillock, 2010). Nonetheless, some preschool children have been reported to perceive the McGurk effect (e.g., McGurk & MacDonald, 1976). DV probe All probes were administered without visual cues because one must use the same testing format regardless of the pseudowords’ intervention condition when comparing performance across intervention conditions. During DV probes, participants selected unfamiliar objects corresponding to the target pseudowords within a given set (e.g., AV, AO, or control). Objects in the DV probe for a given set included unfamiliar objects assigned to the four target pseudowords, a known object (e.g., ball), and two foils (i.e., unfamiliar objects never taught). The DV is the percent accuracy (i.e., correct responses/total trials × 100) for identifying target words receptively within 5 s in a field of seven objects in an AO probe (i.e., examiner uses a speech hoop). The examiner sat across the table from the participant and placed the seven objects in the randomly drawn order on the evenly spaced stickers across the table. The examiner covered her mouth with a speech hoop throughout the probe. She said, “Give me the (target word),” with an upturned palm for each target pseudoword in a predetermined random order. The examiner did not provide repetitions of target pseudowords, even if the participant requested one, or any feedback regarding response accuracy, except for the one known object per set. To maintain engagement and effort, the examiner was permitted to provide positive reinforcement for identifying the known object for each group and for behaviors not targeted in the intervention (e.g., remaining seated and waiting quietly for the next direction; Wolery et al., 2014). Note that the probe was receptive only; participants were not asked to label objects. All data were entered twice into an electronic spreadsheet and initial data entry errors were corrected to ensure data entry accuracy. Further details about the DV probe are available in the study’s procedure manual, which is available from the first author upon request. DV probes lasted approximately 2–10 min depending on whether the control set was assessed that session and the participant’s response speed. AO and AV sets were administered each session. The control set was administered every two to three sessions to provide sufficient data for interpreting the results without conducting excessive assessments that might reduce the saliency of the learning task. Baseline phase The baseline phase occurred prior to teaching the participants any of the target pseudowords.
Baseline sessions occurred three times per week until the participant demonstrated a stable pattern of response (i.e., low variability in accuracy level with either a decreasing or flat trend), with a minimum of three sessions (Wolery et al., 2014). In baseline sessions, participants completed a listening check (i.e., participant repeated the Ling 6 sounds produced by the examiner to assess whether his or her hearing devices were working properly; Ling, 1989; Tye-Murray, 2014), followed by the DV probe. Because the targets were constructed specifically for this study, any correct responses prior to intervention must have arisen from chance. Comparison phase The receptive word learning intervention was provided only during the comparison phase. Comparison phase sessions occurred three times per week and began with a listening check, followed by the DV probe, and then the AO and AV intervention conditions. The order of the AO and AV intervention conditions was alternated across sessions and counterbalanced across participants. In each session, the second condition occurred immediately after the first. Participants were taught to associate pseudowords with unfamiliar objects using a procedure adapted from Leonard et al. (1982). The independent variable that differentiated the AO and AV intervention conditions was the presence or absence of access to speechreading cues (i.e., visual stimuli). All other potentially influential variables were held constant. The examiner never taught any of the pseudowords in the control set. The control set pseudowords were only included during probes at the scheduled times. For the intervention, the examiner introduced each target object one at a time, while engaging with the participant with the selected theme box (see Appendix D). These theme boxes were used to facilitate active manipulation of the unfamiliar objects, maintain the participants’ interest, and more closely approximate how children interact with unfamiliar items in therapeutic and community settings. The theme box items were not used as unfamiliar target objects. Instead, the examiner introduced unfamiliar objects one at a time while the participant engaged with the theme box. The same theme box was used during the AO and AV intervention conditions for the session to maintain consistency of the intervention across conditions. The examiner completed three key intervention components for each of the four target objects: (a) eight labeling exposures, (b) two elicited labels, and (c) two opportunities to identify target objects. For each labeling exposure, the examiner secured the participant’s eye contact, named the unfamiliar object in the sentence final position (e.g., “Here’s the ____.”), and produced a show gesture with the target object toward the participant. At least one utterance or play action by the examiner or participant occurred between each exposure. To maintain uniformity across conditions, the examiner used one hand for the show gesture because she continuously held the speech hoop with one hand in the AO intervention condition. For the elicited labels, the examiner asked the participants to say each target pseudoword two times per session (e.g., “What’s this?” while holding the target object). Participants were not required to say each target pseudoword. If the participant declined to say the target pseudoword, the examiner proceeded with the intervention procedure. The examiner repeated these steps for all four target objects in the set in a predetermined, random order. 
At the end of each intervention condition, the examiner provided two opportunities for the participant to identify each target object. She asked the participant to find each target object twice (e.g., “Where’s the ____?” or “Find the ____.”) with all target objects and a standard collection of four additional unfamiliar objects (i.e., not target objects) present and provided feedback on the participant’s accuracy. After a participant reached criterion of 75% accuracy across three consecutive sessions for the AO or AV pseudoword set in the DV probe, instruction was provided only in the unmastered condition. Intervention ceased for the mastered condition. The comparison phase ended when both conditions reached the predetermined criterion. Maintenance phase Maintenance data were collected using baseline procedures 1 week and 2 weeks after the last comparison phase session. Maintenance DV probes were administered in the same manner as those in the baseline phase. Procedural fidelity Prior to initiating intervention, the examiner achieved procedural fidelity of at least 90% accuracy across two consecutive sessions with young children not participating in the study. Throughout the study, a graduate research assistant (RA) coded at least 25% of sessions across each phase and condition for each participant. All sessions were video-recorded, and then the graduate RA randomly selected sessions via a random number generator. The examiner was blind to which sessions were going to be selected for procedural fidelity coding. Using direct observation of video-recorded sessions, the RA collected procedural fidelity data for the independent variables and numerous control variables (i.e., behaviors that should remain constant across conditions; Gast, 2014) on an electronic spreadsheet. Procedural fidelity behaviors for probes were consistent across all phases. Only the independent variable (i.e., presence or absence of the speech hoop) varied across the AO and AV intervention conditions’ procedural fidelity behaviors. Complete descriptions are available in the study’s procedure manual available upon request from the first author. Procedural fidelity was analyzed formatively at the behavior level. Across all participants, the examiner demonstrated a mean of 99.8% (range: 93–100%) accuracy for administering probes and 98.4% (range: 90–100%) and 98.6% (range: 94–100%) accuracy for AO and AV intervention procedures, respectively. As an additional evaluation of consistency across conditions and to ensure no systematic bias in salience of pseudoword presentation, the duration of target pseudowords in each condition during DV probes and AO and AV intervention conditions was calculated using Audacity 2.1.2 software for two to three randomly selected sessions per participant (at least 12.5% of each participant’s sessions). The duration of pseudowords was very similar across conditions for each participant during probes and intervention. The mean duration for target pseudowords in the AO and AV DV probes was .71 s (SD = .031 s) and .72 s (SD = .027 s), respectively. The mean duration for target pseudowords in the AO and AV intervention conditions was .59 s (SD = .025 s) and .59 s (SD = .031 s), respectively. Interobserver Agreement A secondary coder (e.g., graduate RA) independently scored the DV probe for at least 25% of sessions in each condition for each participant for interobserver agreement (IOA). The primary coder was blind to which sessions were to be coded for IOA. 
The secondary coder was trained to .90 point-by-point agreement with the primary coder for independent coding of the participant’s accuracy responding to the DV probes for three consecutive sessions via video recordings of non-participants (Ayres & Ledford, 2014). Primary and secondary coders completed regular discrepancy discussions. Overall mean IOA was .99 (range: .90–1.00). The mean IOA was .99 (range: .97–1.00) for the baseline phase, .99 (range: .90–1.00) for the comparison phase, and .99 (range: .96–1.00) for the maintenance phase. These IOA are high and well above the recommended standard (i.e., .80; Horner et al., 2005). Analysis Approach Results were interpreted via visual analysis (visual inspection), the primary method for analyzing single case research data (Gast & Spriggs, 2014). As described in the What Works Clearinghouse (WWC) Procedures and Standards Handbook, Version 3.0, multiple features were examined during visual analysis to determine the presence of a functional relation between the independent variable and the DV: level, trend, variability, overlap, and consistency of data in similar phases (What Works Clearinghouse, 2014). Although WWC does not provide explicit standards for the AATD, principles from other relevant SCRDs and guidelines from experts (e.g., Ledford & Gast, 2018) were applied. We used a mastery criterion of at least 75% accuracy across three consecutive sessions for each pseudoword set. Because we used pseudowords, the participants cannot learn the pseudowords outside of intervention. Thus, participants were only expected to learn pseudowords in the AV and AO sets, not the control set. Results Effectiveness of Audiovisual and Auditory-Only Intervention Conditions (Research Question A) Evidence for the effectiveness of the AO and AV intervention conditions is drawn primarily from comparing the level and trend of the AO and AV intervention conditions to the control (no-teaching) condition in the comparison phase. As seen in Figures 1–4, data from Participants 1, 2, and 4 achieved the mastery criterion (i.e., at least 75% accuracy across three consecutive sessions) for the AO and AV sets and show a clear differentiation in level and trend for the AO and AV sets from the control set. Data from these three participants show higher accuracy and increasing trends for the AO and AV sets as compared with lower accuracy and a flat (i.e., not increasing) trend for the control set. These data indicate strong evidence of a functional (causal) relation between the AO and AV interventions and increased identification of target pseudowords receptively, as predicted. Data from Participant 3 exhibit moderate evidence of a functional relation between the AO and AV interventions and accuracy identifying target pseudowords. Although there is a small difference in level between the AO and AV sets relative to the control set, there is a flat rather than an increasing trend across the comparison phase for the AO and AV sets indicating a consistent but relatively weaker functional relation as compared to the other participants. Participants 1, 2, and 4 all reached mastery criterion (i.e., 75% accuracy identifying target pseudowords across three consecutive sessions) for the AO and AV sets. Nonetheless, they differed substantially in the number of sessions required to reach mastery (range: 4–18 sessions) indicating variation in learning rates across participants. Participants 1 and 4 exhibited rapid increases in accuracy with limited variability. 
Participant 2 required more than twice as many sessions to reach mastery and presented with substantial session-to-session variability. Maintenance data provide information about the degree to which participants were able to identify taught pseudowords 1 and 2 weeks after intervention ended without additional instruction (i.e., short-term retention). Maintenance data across participants indicate moderate evidence of retention for taught pseudowords based on differentiation between the AV and AO sets from the control set during the maintenance phase. Participant 1 At baseline, Participant 1 exhibited low accuracy identifying pseudowords for the AO and AV sets as shown in Figure 1. Although he achieved 50% accuracy for the control set at Session 1, the decreasing, counter-therapeutic trend (i.e., moving in the direction opposite to that expected with the onset of intervention) strongly suggests those results were due to chance. That is, the initial variation in performance is likely the result of random responses, which is not uncommon in studies involving comprehension of pseudowords. Participant 1 demonstrated a shift in level after the first intervention session for AO and AV sets while the control set continued to exhibit a flat slope. Data for the AO and AV sets show an accelerating trend with a moderate, positive slope. Participant 1’s accuracy declined from the prior session on only one session. Accuracy for the control set remained at 0–25%. Even though the target pseudowords were randomly paired with unfamiliar objects, participants were expected to identify some objects accurately because they were identifying objects from a closed set. It is important to recall that because there were four trials per set, participants could only achieve scores of 0%, 25%, 50%, 75%, and 100% accuracy. There is a difference in level and trend between the AO and AV sets and the control set during the comparison phase, indicating a functional relation between the AO and AV interventions and increased identification of target pseudowords receptively, as predicted. Participant 1’s accuracy for the AO and AV sets exceeded the control set across the maintenance period, which spanned four weeks. Participant 2 As shown in Figure 2, Participant 2 demonstrated low, stable performance in the baseline phase. During the comparison phase, she exhibited substantial variability in accuracy across sessions. Participant 2 remained seated and responded at an appropriate pace without overt signs of inattention or impulsivity during the DV probes. She was observed to spontaneously label some items correctly as the examiner set up the DV probe but then to respond incorrectly when asked to give them to the examiner. This behavior and Participant 2’s observed variability highlight the need to consider performance and measurement error when interpreting assessment results of young children. Participant 2 achieved 50% accuracy twice for the control set; however, accuracy fell to 0% on the next session each time. Thus, chance rather than learning is the most likely explanation for this control set performance. Nonetheless, out of 10 sessions, her performance for the AO and AV sets exceeds that for the control set in eight and seven sessions, respectively. Further, a clear differentiation in level and trend from the control set is evident beginning at Session 18 for the AO set and Session 17 for the AV set. Participant 2 achieved the mastery criterion in 18 comparison phase sessions for the AO and AV sets.
These data indicate a functional relation between the AO and AV interventions and increased accuracy identifying target pseudowords receptively, as predicted. During the maintenance phase, Participant 2 achieved 25–75% accuracy for AO and AV sets and 0% accuracy for the control set, which provides weak evidence of retention. Participant 3 During the baseline phase, Participant 3 achieved 0% to 25% accuracy for the AO and control sets, as shown in Figure 3. Because he achieved 50% accuracy for the AV set for Session 3, the baseline phase was extended by one session during which his performance returned to 0% accuracy. Participant 3, the youngest participant, presented with notably greater need for redirection and more direct requests for his attention prior to administering exposures during intervention than the other participants. Although accuracy for the AO and AV sets exceeds the control set for most sessions in the comparison phase, the trend is flat rather than increasing. Because of this trend, intervention was discontinued and no maintenance data were collected. Thus, Participant 3’s data provide moderate evidence of a functional relation between the AO and AV interventions and the DV. Participant 4 As shown in Figure 4, Participant 4 demonstrated low, stable performance in the baseline phase. During the comparison phase, he exhibited a shift in level after three and one intervention sessions for the AO and AV sets, respectively. Data for the AO and AV sets show an accelerating trend with moderate slope and minimal variability. His accuracy only declined on one session from the immediately preceding session. Accuracy for the control set remained at 0–25%. The clear difference in level and trend between the control set and the AO and AV sets indicates a functional relation between the AO and AV interventions and increased identification of target pseudowords receptively. Participant 4 achieved 75–100% accuracy in the maintenance phase for AO and AV sets relative to the control set at 0–25% accuracy, indicating strong retention. In summary, three participants provide strong evidence in support of the hypothesis that participants would learn pseudowords under AO and AV intervention conditions. The fourth participant (i.e., Participant 3) provided moderate evidence in support of this hypothesis. Efficiency of Audiovisual and Auditory-Only Intervention Conditions (Research Question B) None of the participants exhibited a differential rate of word learning in the AO versus AV intervention conditions as shown in Figures 1–4. Because the participants achieved the mastery criterion close in time across intervention conditions and exhibited similar trends with overlapping data points between AO and AV sets, their data do not provide evidence of superior efficiency for either condition. The data are consistent with the null hypothesis of approximately equal word learning efficiency in the AO and AV intervention conditions. There was no inhibition evident when providing access to visual cues alongside auditory input nor was there differential benefit for the AV condition on accuracy identifying target pseudowords presented without access to visual cues. Participants achieved the mastery criterion for AO and AV sets no more than two sessions apart and all exhibited overlapping data points. Thus, neither the hypothesis asserted by the multisensory theory nor the hypothesis asserted by the unisensory theory was supported. Figure 1. View largeDownload slide Participant 1. 
Figure 1. Participant 1. Percent accuracy on auditory-only probes of target pseudowords; AO = auditory-only; AV = audiovisual. Sessions 1–12 were completed three times per week. Sessions 13–15 were completed one time per week. The horizontal dotted line denotes the 75% criterion level.
Figure 2. Participant 2. Percent accuracy on auditory-only probes of target pseudowords; AO = auditory-only; AV = audiovisual. Baseline and comparison sessions were completed three times per week. Maintenance sessions were completed one time per week. The horizontal dotted line denotes the 75% criterion level.
Figure 3. Participant 3. Percent accuracy on auditory-only probes of target pseudowords; AO = auditory-only; AV = audiovisual. Baseline and comparison sessions were completed three times per week. The horizontal dotted line denotes the 75% criterion level.
Figure 4. Participant 4. Percent accuracy on auditory-only probes of target pseudowords; AO = auditory-only; AV = audiovisual. Baseline and comparison sessions were completed three times per week. Maintenance sessions were completed one time per week. The horizontal dotted line denotes the 75% criterion level.
Discussion Effectiveness of Audiovisual and Auditory-Only Intervention Conditions (Research Question A) The replicated effect of the AO and AV intervention conditions relative to the control set indicates that participants learned the taught pseudowords with and without access to visual cues and did not learn the untaught (i.e., control set) pseudowords. No probable threats to internal validity that would explain the results were identified. Thus, learning can be reasonably attributed to the intervention. Results yield strong evidence that three of the four participants were able to pair pseudowords and unfamiliar objects following direct instruction on the objects' labels. Our findings are consistent with our hypothesis and with prior findings indicating that children with hearing loss can learn words with explicit instruction (e.g., Houston et al., 2005; Lund & Douglas, 2016; Willstedt-Svensson, Löfqvist, Almqvist, & Sahlén, 2004).
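As an aside on why isolated control-set scores of 50% are consistent with guessing rather than learning, the sketch below computes how often each possible probe score would arise by chance on a four-trial, closed-set probe. It is our illustration rather than part of the study's analysis, and it assumes a one-in-four chance of guessing correctly on any single trial; the true guessing probability depends on the size of the closed set presented on each trial.

```python
from math import comb

N_TRIALS = 4    # four probe trials per word set, as reported in the Results
P_GUESS = 0.25  # assumed probability of a correct guess on one trial (hypothetical)

def chance_score_probabilities(n=N_TRIALS, p=P_GUESS):
    """Binomial probability of each possible probe score arising purely by guessing."""
    return {100 * k // n: comb(n, k) * (p ** k) * ((1 - p) ** (n - k)) for k in range(n + 1)}

probabilities = chance_score_probabilities()
for score, prob in probabilities.items():
    print(f"{score:>3}% accuracy by chance: p = {prob:.3f}")

# Probability of scoring 50% or higher on a probe purely by guessing
print(f"P(score >= 50%) = {sum(p for s, p in probabilities.items() if s >= 50):.3f}")
```

Under this assumed guessing rate, a score of 50% or better occurs on roughly one probe in four by chance alone, so occasional control-set scores of 50% that fall back to 0% on the next session are unsurprising in the absence of learning.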
The participants' performance patterns highlight the variability across children with hearing loss, even when they share a number of characteristics (e.g., prelingual hearing loss, consistent hearing technology use, and the same specialized preschool). Variation within and across young children in their responses to intervention ought to be expected and addressed clinically through data-informed decisions in special education, because demographic and other common child characteristics may not readily account for these variations (Ledford et al., 2016). In future studies, factors that explain a substantial portion of the variance in response may be identified. Efficiency of Audiovisual and Auditory-Only Intervention Conditions (Research Question B) None of the participants exhibited a differential effect between the AO and AV sets for word learning efficiency during probes administered without access to visual cues. The three participants who reached the mastery criterion did so for both intervention sets. Further, they achieved the mastery criterion no more than two sessions apart and exhibited similar trends in the comparison phase for each set. Therefore, neither the pattern predicted by the unisensory theory nor the one predicted by the multisensory theory was observed. Recall that all pseudowords were tested without visual cues to provide equivalent testing conditions even though they were taught under different conditions. Although the teaching and testing procedures aligned for the AO intervention condition, for the AV intervention condition participants were taught with access to speechreading cues and tested without access to speechreading cues. Blocking visual cues during the probes eliminated the possibility that participants would be exposed to visual input for the AO pseudowords during testing, provided an appropriately conservative test of the benefits of AV input, and directly addressed whether AV input inhibits learning to identify words when speechreading cues are not available. The AO intervention condition can be viewed as unusual and "unnatural" from a social-pragmatic perspective (see Rhoades et al., 2016). Thus, strong positive evidence is needed to support its implementation. That is, one could argue that the AO intervention condition should be recommended only if the socially and pragmatically appropriate AV intervention condition is inhibitory. The results of this study, and the extant literature thus far, do not provide such evidence. Additionally, data from the participants did not indicate a benefit of AV input for the efficiency of word learning, as would be expected under the multisensory position. The results could be partially related to task difficulty level. To optimize the learning conditions for an arguably fair comparison between the AO and AV intervention conditions, the intervention sessions were conducted in a quiet room with minimal background noise. That is, we chose to ensure optimal access to the auditory signal to test the AO intervention condition under ideal listening conditions. Finally, only the examiner and participant were present, to simulate a common therapy session scenario. Indeed, the extant literature on speech perception clearly indicates that AV input yields greater benefit over AO input at poorer signal-to-noise ratios (e.g., Sumby & Pollack, 1954) than were employed herein.
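For readers less familiar with how listening conditions are quantified, signal-to-noise ratio (SNR) expresses the level of the talker's speech relative to the competing background noise in decibels. The minimal sketch below is included only as a reference point for the discussion of quiet versus noisy settings; it is not drawn from the study, and the example amplitude values are hypothetical.

```python
import math

def snr_db(signal_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in decibels, computed from RMS amplitudes."""
    return 20.0 * math.log10(signal_rms / noise_rms)

# Hypothetical values: speech at ten times the RMS level of the room noise
# corresponds to a favorable +20 dB SNR; equal levels correspond to 0 dB,
# a much more challenging condition in which visual speech cues typically help most.
print(snr_db(1.0, 0.1))  # 20.0
print(snr_db(1.0, 1.0))  #  0.0
```

Lower SNRs of this kind describe the noisier conditions under which the speech perception literature cited above reports the largest audiovisual advantage.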
Participants may exhibit a differential response in conditions with a poorer signal-to-noise ratio, such as classroom, home, or other less controlled natural language-learning settings in which the auditory input is degraded. Limitations Before discussing the implications of this study and future directions, several limitations should be acknowledged. First, because the examiner conducted both the assessment and intervention portions of the study, she was not blind to condition when scoring. This scenario can result in detection bias (i.e., systematic differences between conditions in determining outcomes; Cochrane, 2017). Having separate, blinded examiners for the assessment and intervention was not feasible for this study but should be considered in future studies. In the present study, the high point-by-point IOA reduces the likelihood that detection bias was present. Second, the study included only four children, and the external validity of the results is limited by the characteristics of those four participants. The same results may not be found for all children with hearing loss. It is unknown whether the same results would be found for children at other ages, those with higher or lower language levels, or those with different hearing histories, including children who have had access to sound for shorter periods of time. The specific procedures are expected to require adaptations for children at different ages and/or language levels. Even though the study procedures were designed to provide a salient, simple word learning paradigm, the difficulty level exceeded the current abilities of one participant. This study focused on fast mapping (i.e., the ability to link a word to its referent after only a few exposures; Carey & Bartlett, 1978) and, to a limited degree, retention for word learning. It did not address word extension or other aspects of semantic knowledge, which are additional components of vocabulary knowledge (Walker & McGregor, 2013). In addition, we only tested receptive, not expressive, performance in the word learning task. These aspects of word learning should be addressed in future studies, particularly in light of the identified weaknesses in lexical organization, including taxonomic, semantic, and phonological organization, in children with hearing loss (Lund & Dinsmoor, 2016; Wechsler-Kashi, Schwartz, & Cleary, 2014). It is important to test whether tasks requiring more complex semantic knowledge (e.g., relations between words or more abstract concepts) or requiring children to produce words independently will reveal differences in learning efficiency under AO versus AV interventions. Strengths Five strengths should be acknowledged. First, the use of pseudowords enabled tight control of the number of exposures to the target words and strict balancing of the acoustic features of the words across the AO and AV intervention conditions. When real words are used, it is impossible to control for the number of prior exposures participants have had to a given word or for partial understanding of words, even ones participants do not identify or use correctly. Use of pseudowords ameliorates this concern. In addition, neither caregivers nor teachers were aware of which words participants were being taught. Second, numerous steps were taken, including the use of pseudowords, to ensure equal difficulty across word sets, which is a critical characteristic of AATD studies. No evidence of unbalanced sets was identified.
Third, participants' ability to identify target words was tested at the beginning of each session rather than at the end to provide a more distal measure of word learning that more closely approximates what children must do to build their vocabulary skills. Being able to identify a word immediately after instruction but not later in the day or week offers little value to children in their everyday settings. Additionally, the target pseudowords were taught until the participant reached a predetermined criterion for three consecutive days, rather than for a set number of sessions, which may more closely align with the goal structure of educational and therapeutic settings, at least for some children with hearing loss. Fourth, including a control set, which is not required in an AATD study, permitted evaluation of the effectiveness of the AO and AV intervention conditions and increased the internal validity of the study design. Fifth, high procedural fidelity across all participants and phases supports the study's internal validity. Theoretical and Clinical Implications Our findings do not support the unisensory (auditory-only) theoretical, but heretofore untested, assertion that visual access provided alongside auditory input differentially inhibits auditory processing in children with hearing loss on a word learning task compared with conditions that isolate the auditory sense (Pollack, 1970). We did not observe any detrimental effect of permitting simultaneous visual access to speechreading cues during the AV intervention condition on the participants' performance, even though the probes blocked access to these visual cues. This theoretical implication ought to be further explored in future studies and applied educationally if replicated and extended across learning conditions. Importantly, the results do not provide evidence supporting the practice of eliminating visual access to the speaker's mouth and throat when teaching spoken words to children with hearing loss in order to increase the efficiency of word learning. Participants demonstrated the ability to identify taught pseudowords in assessments that blocked access to speechreading cues regardless of whether they were permitted access to these cues during teaching sessions. The "hand cue" may be used to block access to visual speechreading cues. When using the hand cue, the speaker covers his or her mouth while speaking to children with hearing loss in an attempt to focus the child's attention and skills on auditory cues rather than visual cues by physically restricting the child's view of the speaker's mouth (Estabrooks, 1998). Our findings may be added to the list of reasons Rhoades et al. (2016) offered to explain why use of "the hand cue (or any substitute for it) is no longer recommended by AVT [Auditory-Verbal Therapy]" (p. 287). Although Rhoades et al. (2016) are appropriately critical of the lack of evidence supporting the use of the hand cue, they rely on indirect evidence from studies of children and adults with typical hearing and on studies using speech recognition or non-language tasks rather than language-learning tasks. The direct evidence from the current study bolsters their recommendation to eliminate use of the hand cue. Even though we did not use the hand cue in the current study, the use of a speech hoop should be viewed as a "substitute for it." Like the hand cue, use of the speech hoop is intended to block access to speechreading cues and focus the child's attention on the auditory stimuli alone.
The current study's evidence could also apply more broadly to other practices of restricting or eliminating access to visual cues. Note that the recommendation to refrain from prohibiting access to speechreading cues applies to intervention, not assessment. Evaluating the degree to which a child can perform a given task with versus without access to speechreading cues can provide clinically valuable information for treatment planning, clinical recommendations, and determining candidacy for hearing technology, including cochlear implantation and assistive listening devices such as digital modulation (DM) or frequency modulation (FM) systems. In addition, the current study did not identify the beneficial effect of access to speechreading cues predicted by the multisensory position. It is possible that children with hearing loss would exhibit more rapid word learning in an AV intervention condition relative to an AO intervention condition if intervention occurred in a room with a poorer signal-to-noise ratio, similar to the acoustic environment of many classrooms. Nonetheless, one must consider very carefully whether prohibiting access to speechreading cues during intervention is most beneficial in light of the current evidence. With adequate support through collaboration with researchers and professional development, clinicians could use SCRD principles when collecting data in varying conditions to determine whether a given child demonstrates a differential response to either AO or AV input for a particular skill. Future Directions This study provides evidence of successful word learning in AO and AV intervention conditions for preschool children with hearing loss. Continued work is needed to determine whether one or both of these conditions results in greater learning efficiency for certain tasks for at least some children with hearing loss. Most immediately, future studies similar to the current study should be conducted (a) with words taught under a poorer signal-to-noise ratio that simulates the acoustic environment of a typical classroom but tested in a quiet environment, (b) with learning tasks of varying difficulty levels, (c) with learning contexts that more closely approximate classroom instruction, and (d) with participants of different chronological ages and language levels. To understand the word learning skills of younger children with hearing loss and those with lower language levels under AO and AV intervention conditions, less demanding tasks and more proximal measures should be considered. These studies could contribute to identifying the active ingredients in word learning interventions for children with hearing loss. Future studies will need to address whether and how pretreatment characteristics (e.g., age, language abilities, audiological history and skills, and multisensory processing skills) influence differential responses to AO versus AV intervention conditions in order to individualize intervention. Using pretreatment characteristics offers the potential to match individuals efficiently and effectively with specific interventions or intervention features. Differences in auditory access, time since cochlear implantation, and length and/or degree of auditory deprivation might all influence an individual's response to AO versus AV input. Individual differences in multisensory processing abilities have been documented, but the mechanisms responsible for such differences are not yet well understood (Murray et al., 2016; Stevenson et al., 2017).
Measures of multisensory integration may provide an avenue for capturing differences in how children with hearing loss respond to AO versus AV intervention conditions proximally and over time. Use of multisensory integration measures will require (a) continued research on the development of multisensory integration in children with and without hearing loss, (b) thorough evaluations of the reliability and construct validity of specific tasks, and (c) development of tasks appropriate for individuals with a wide variety of language and developmental levels. Of note, in the current study all participants completed the multimodal word recognition task and two of the four completed the McGurk task with little to no instruction. The current study was not designed to explain variation in performance on multisensory assessments. In the future, large randomized controlled trials could evaluate differences in behavioral outcomes (e.g., performance on learning tasks) and neural outcomes (e.g., degree of enhancement with multisensory versus unisensory stimuli) of young children who receive intervention in either AO or AV presentation formats. Such studies could evaluate whether differences between the AO and AV intervention conditions are moderated by pretreatment characteristics. Conclusion This study provides an early step in testing two competing viewpoints on how to develop spoken language and multisensory processing skills in children with hearing loss. It is the first known study to provide direct evidence for the effectiveness and efficiency of word learning instruction in AO and AV intervention conditions for children with hearing loss. Perhaps the most striking finding is that, even under optimal listening conditions, there was no inhibitory effect of access to visual speechreading cues on word learning. This finding was observed even though word learning was assessed in an AO presentation format, an optimal test of whether auditory learning had occurred. Following instruction in the AO and AV intervention conditions, participants accurately identified taught pseudowords during AO probes with similar rates of learning. Therefore, these findings do not support the widespread practice of prohibiting access to visual speechreading cues in order to isolate auditory input and increase the rate of word learning, even when such learning is subsequently assessed without access to visual cues. Future research is required to replicate the findings and to determine whether the same pattern of results is observed for children with hearing loss with different audiological and language profiles, for tasks of varying difficulty levels, and across different timespans. Notes J.M., MS, PhD student, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN; S.C., PhD, Professor, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN; P.Y., PhD, Professor, Department of Special Education, Vanderbilt University, Nashville, TN. J.M. conceived of the study, completed data collection, participated in the interpretation of the data, and drafted the manuscript; S.C. participated in the study design and data collection, helped interpret the data, and helped draft the manuscript; P.Y. participated in the design of the study, interpretation of the data, and drafting of the manuscript. All authors read and approved the final manuscript.
Funding This work was supported by the American Speech-Language-Hearing Foundation [2016 Student Research Grant in Early Childhood Language Development]; the National Center for Advancing Translational Sciences of the National Institutes of Health [UL1 TR000445]; the United States Department of Education [Preparation of Leadership Personnel grant H325D140087]; the Eunice Kennedy Shriver National Institute of Child Health and Human Development of the National Institutes of Health [U54HD083211]; and the Scottish Rite Foundation of Nashville. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the funding agencies. Conflict of Interest The authors have no conflicts of interest to disclose. Acknowledgments We thank the families who participated in our study and the teachers who collaborated with us to make this study possible. We thank Iliza Butera for her assistance with the multisensory tasks and Michael Douglas for sharing his expertise in providing speech, language, and educational services to children with hearing loss. We also gratefully acknowledge René Gifford for sharing her expertise in pediatric cochlear implantation and for providing comments on an earlier draft of this manuscript, Blair Lloyd for sharing her expertise in SCRD methods, and Melody Sun for serving as a research assistant.
References
Anderson, K. L. (2015). Access is the issue, not hearing loss: New policy clarification requires schools to ensure effective communication access. SIG 9 Perspectives on Hearing and Hearing Disorders in Childhood, 25, 24–36. doi:10.1044/hhdc25.1.24
Ayres, K., & Ledford, J. R. (2014). Dependent measures and measurement systems. In Gast, D. L., & Ledford, J. R. (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed., pp. 124–153). New York, NY: Routledge.
Baum, S. H., Stevenson, R. A., & Wallace, M. T. (2015). Testing sensory and multisensory function in children with autism spectrum disorder. Journal of Visualized Experiments, e52677. doi:10.3791/52677
Bergeson, T. R., Pisoni, D. B., & Davis, R. A. O. (2005). Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants. Ear and Hearing, 26, 149–164. doi:10.1097/00003446-200504000-00004
Bergeson, T., Pisoni, D. B., & Davis, R. A. (2003). A longitudinal study of audiovisual speech perception by children with hearing loss who have cochlear implants. Volta Review, 103, 347–370.
Bergeson, T., & Pisoni, D. (2004). Audiovisual speech perception in deaf adults and children following cochlear implantation. In Calvert, G., Spence, C., & Stein, B. E. (Eds.), The handbook of multisensory processes (pp. 749–772). Cambridge, MA: MIT Press.
Bernstein, L. E., Eberhardt, S. P., & Auer, E. T., Jr. (2014). Audiovisual spoken word training can promote or impede auditory-only perceptual learning: Prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults. Frontiers in Psychology, 5, 934. doi:10.3389/fpsyg.2014.00934
Bernthal, J. E., Bankson, N. W., & Flipsen, P., Jr. (2009). Articulation and phonological disorders: Speech sound disorders in children (6th ed.). Boston, MA: Pearson.
Camarata, S. M., & Leonard, L. B. (1985). Young children pronounce nouns more accurately than verbs: Evidence for a semantic-phonological interaction. Papers and Reports on Child Language Development, 24, 38–45.
Campbell, R. (2008). The processing of audio-visual speech: Empirical and neural bases. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 363, 1001–1010. doi:10.1098/rstb.2007.2155
Cannon, J. E., Guardino, C., Antia, S. D., & Luckner, J. L. (2016). Single-case design research: Building the evidence-base in the field of education of deaf and hard of hearing students. American Annals of the Deaf, 160, 440–452. doi:10.1353/aad.2016.0007
Carey, S., & Bartlett, E. (1978). Acquiring a single new word. Papers and Reports on Child Language Development at Stanford Child Language Conference, 15, 17–29.
Carrow-Woolfolk, E., & Allen, E. (2014a). Test of auditory comprehension of language—fourth edition (TACL-4). Austin, TX: Pro-Ed.
Carrow-Woolfolk, E., & Allen, E. (2014b). Test of expressive language (TEXL). Austin, TX: Pro-Ed.
Cochrane. (2017). Assessing risk of bias in included studies. Retrieved from http://methods.cochrane.org/bias/assessing-risk-bias-included-studies
Contrera, K. J., Choi, J. S., Blake, C. R., Betz, J. F., Niparko, J. K., & Lin, F. R. (2014). Rates of long-term cochlear implant use in children. Otology & Neurotology, 35, 426–430. doi:10.1097/MAO.0000000000000243
Convertino, C., Borgna, G., Marschark, M., & Durkin, A. (2014). Word and world knowledge among deaf learners with and without cochlear implants. Journal of Deaf Studies and Deaf Education, 19, 471–483. doi:10.1093/deafed/enu024
Davidson, L. S., Geers, A. E., & Nicholas, J. G. (2014). The effects of audibility and novel word learning ability on vocabulary level in children with cochlear implants. Cochlear Implants International, 15, 211–221. doi:10.1179/1754762813Y.0000000051
de Villers-Sidani, E., Chang, E. F., Bao, S., & Merzenich, M. M. (2007). Critical period window for spectral tuning defined in the primary auditory cortex (A1) in the rat. Journal of Neuroscience, 27, 180–189. doi:10.1523/JNEUROSCI.3227-06.2007
Doehring, D. G., & Ling, D. (1971). Programmed instruction of hearing-impaired children in the auditory discrimination of vowels. Journal of Speech, Language, and Hearing Research, 14, 746–754. doi:10.1044/jshr.1404.746
Dunn, L. M., & Dunn, D. M. (2007). Peabody picture vocabulary test, fourth edition (PPVT-4). Minneapolis, MN: Pearson Assessments.
Ehrler, D., & McGhee, R. (2008). Primary test of nonverbal intelligence (PTONI). Austin, TX: Pro-Ed.
Erber, N. P. (1969). Interaction of audition and vision in the recognition of oral speech stimuli. Journal of Speech, Language, and Hearing Research, 12, 423–425. doi:10.1044/jshr.1202.423
Erber, N. P. (1972). Auditory, visual, and auditory-visual recognition of consonants by children with normal and impaired hearing. Journal of Speech, Language, and Hearing Research, 15, 413–422. doi:10.1044/jshr.1502.413
Erber, N. P. (1979). Speech perception by profoundly hearing-impaired children. Journal of Speech and Hearing Disorders, 44, 255–270. doi:10.1044/jshd.4403.255
Estabrooks, W. (1998). Cochlear implants for kids. Washington, DC: Alexander Graham Bell Association for the Deaf.
Estabrooks, W. (Ed.). (2001). 50 frequently asked questions about auditory-verbal therapy. Toronto, Ontario: Learning to Listen Foundation.
Ewing, I. R., Ewing, A. W. G., & Cockersole, F. W. (1938). The handicap of deafness. British Journal of Educational Psychology, 8, 307–312. doi:10.1111/j.2044-8279.1938.tb03134.x
Fu, Q. J., & Galvin, J. J. (2007). Perceptual learning and auditory training in cochlear implant recipients. Trends in Amplification, 11, 193–205. doi:10.1177/1084713807301379
Fudala, J. B. (2000). Arizona articulation proficiency scale—third revision (Arizona-3). Los Angeles, CA: Western Psychological Services.
Gast, D. L. (2014). General factors in measurement and evaluation. In Gast, D. L., & Ledford, J. R. (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed., pp. 85–104). New York, NY: Routledge.
Gast, D. L., & Spriggs, A. D. (2014). Visual analysis of graphic data. In Gast, D. L., & Ledford, J. R. (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed., pp. 176–210). New York, NY: Routledge.
Geers, A. E. (2002). Factors affecting the development of speech, language, and literacy in children with early cochlear implantation. Language, Speech, and Hearing Services in Schools, 33, 172–183. doi:10.1044/0161-1461(2002/015)
Geers, A. E., Mitchell, C. M., Warner-Czyz, A., Wang, N.-Y., Eisenberg, L. S., & CDaCI Investigative Team (2017). Early sign language exposure and cochlear implantation benefits. Pediatrics, 140, e20163489. doi:10.1542/peds.2016-3489
Geers, A., Brenner, C., & Davidson, L. (2003). Factors associated with development of speech perception skills in children implanted by age five. Ear and Hearing, 24, 24S–35S. doi:10.1097/01.AUD.0000051687.99218.0F
Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10, 278–285. doi:10.1016/j.tics.2006.04.008
Gilley, P. M., Sharma, A., Mitchell, T. V., & Dorman, M. F. (2010). The influence of a sensitive period for auditory-visual integration in children with cochlear implants. Restorative Neurology and Neuroscience, 28, 207–218. doi:10.3233/RNN-2010-0525
Glick, H., & Sharma, A. (2017). Cross-modal plasticity in developmental and age-related hearing loss: Clinical implications. Hearing Research, 343, 191–201. doi:10.1016/j.heares.2016.08.012
Hillock, A. R. (2010). Developmental changes in the temporal window of auditory and visual integration (Order No. 3525505). Available from ProQuest Dissertations & Theses Global. (1037962194). Retrieved from http://login.proxy.library.vanderbilt.edu/login?url=https://search.proquest.com/docview/1037962194?accountid=14816
Holt, R. F., Kirk, K. I., & Hay-McCutcheon, M. (2011). Assessing multimodal spoken word-in-sentence recognition in children with normal hearing and children with cochlear implants. Journal of Speech, Language, and Hearing Research, 54, 632–657. doi:10.1044/1092-4388(2010/09-0148)
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179. doi:10.1177/001440290507100203
Houston, D. M., Beer, J., Bergeson, T. R., Chin, S. B., Pisoni, D. B., & Miyamoto, R. T. (2012). The ear is connected to the brain: Some new directions in the study of children with cochlear implants at Indiana University. Journal of the American Academy of Audiology, 23, 446–463. doi:10.3766/jaaa.23.6.7
Houston, D. M., Carter, A. K., Pisoni, D. B., Kirk, K. I., & Ying, E. A. (2005). Word learning in children following cochlear implantation. Volta Review, 105, 41–72.
Houston, D. M., Stewart, J., Moberly, A., Hollich, G., & Miyamoto, R. T. (2012). Word learning in deaf children with cochlear implants: Effects of early auditory experience. Developmental Science, 15, 448–461. doi:10.1111/j.1467-7687.2012.01140.x
Johnson, C., & Goswami, U. (2010). Phonological awareness, vocabulary, and reading in deaf children with cochlear implants. Journal of Speech, Language, and Hearing Research, 53, 237–261. doi:10.1044/1092-4388(2009/08-0139)
Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman brief intelligence test, second edition. Bloomington, MN: Pearson, Inc.
Kirk, K. I., Hay-McCutcheon, M. J., Holt, R. F., Gao, S., Qi, R., & Gehrlein, B. L. (2007). Audiovisual spoken word recognition by children with cochlear implants. Audiological Medicine, 5, 250–261. doi:10.1080/16513860701673892
Kirk, K. I., Pisoni, D. B., & Lachs, L. (2002). Audiovisual integration of speech by children and adults with cochlear implants. Proceedings of the 7th International Conference on Spoken Language Processing, 1689–1692.
Kral, A., Tillein, J., Heid, S., Hartmann, R., & Klinke, R. (2005). Postnatal cortical development in congenital auditory deprivation. Cerebral Cortex, 15, 552–562. doi:10.1093/cercor/bhh156
Kral, A., Tillein, J., Heid, S., Klinke, R., & Hartmann, R. (2006). Cochlear implants: Cortical plasticity in congenital deprivation. Progress in Brain Research, 157, 283–402.
Lachs, L., Pisoni, D. B., & Kirk, K. I. (2001). Use of audiovisual information in speech perception by prelingually deaf children with cochlear implants: A first report. Ear and Hearing, 22, 236–251. doi:10.1097/00003446-200106000-00007
Lederberg, A. R., Prezbindowski, A. K., & Spencer, P. E. (2000). Word-learning skills of deaf preschoolers: The development of novel mapping and rapid word-learning strategies. Child Development, 71, 1571–1585. doi:10.1111/1467-8624.00249
Lederberg, A. R., & Spencer, P. E. (2001). Vocabulary development of deaf and hard of hearing children. In Clark, M. D., Marschark, M., & Karchmer, M. (Eds.), Context, cognition, and deafness (pp. 88–112). Washington, DC: Gallaudet University Press.
Lederberg, A. R., & Spencer, P. E. (2009). Word learning abilities in deaf and hard-of-hearing preschoolers: Effect of lexicon size and language modality. Journal of Deaf Studies and Deaf Education, 14, 44–62. doi:10.1093/deafed/enn021
Ledford, J. R., Barton, E. E., Hardy, J. K., Elam, K., Seabolt, J., Shanks, M., … Kaiser, A. (2016). What equivocal data from single case comparison studies reveal about evidence-based practices in early childhood special education. Journal of Early Intervention, 38, 79–91. doi:10.1177/1053815116648000
Ledford, J. R., & Gast, D. L. (Eds.). (2018). Single case research methodology: Applications in special education and behavioral sciences (3rd ed.). New York, NY: Routledge.
Leonard, L. B., Schwartz, R. G., Chapman, K., Rowan, L. E., Prelock, P. A., Terrell, B., … Messick, C. (1982). Early lexical acquisition in children with specific language impairment. Journal of Speech, Language, and Hearing Research, 25, 554–564. doi:10.1044/jshr.2504.554
Lewkowicz, D. J. (2010). Infant perception of audio-visual speech synchrony. Developmental Psychology, 46, 66–77. doi:10.1037/a0015579
Ling, A. H. (1976). Training of auditory memory in hearing-impaired children: Some problems of generalization. Ear and Hearing, 1, 150–157.
Ling, D. (1989). Foundations of spoken language for hearing impaired children. Washington, DC: Alexander Graham Bell Association for the Deaf.
Luckner, J. L., & Cooke, C. (2010). A summary of the vocabulary research with students who are deaf or hard of hearing. American Annals of the Deaf, 155, 38–67.
Lund, E. (2016). Vocabulary knowledge of children with cochlear implants: A meta-analysis. Journal of Deaf Studies and Deaf Education, 21, 107–121. doi:10.1093/deafed/env060
Lund, E., & Dinsmoor, J. (2016). Taxonomic knowledge of children with and without cochlear implants. Language, Speech, and Hearing Services in Schools, 47, 236–245. doi:10.1044/2016_LSHSS-15-0032
Lund, E., & Douglas, W. M. (2016). Teaching vocabulary to preschool children with hearing loss. Exceptional Children, 83, 26–41. doi:10.1177/0014402916651848
Lund, E., Douglas, W. M., & Schuele, C. M. (2015). Semantic richness and word learning in children with hearing loss who are developing spoken language: A single case design study. Deafness & Education International, 17, 163–175. doi:10.1179/1557069X15Y.0000000004
Lund, E., & Schuele, C. M. (2014). Effects of a word-learning training on children with cochlear implants. Journal of Deaf Studies and Deaf Education, 19, 68–84. doi:10.1093/deafed/ent036
Martin, N. A., & Brownell, R. (2005). Test of auditory processing skills—third edition (TAPS-3). Austin, TX: Pro-Ed.
Marulis, L. M., & Neuman, S. B. (2010). The effects of vocabulary intervention on young children's word learning: A meta-analysis. Review of Educational Research, 80, 300–335. doi:10.3102/0034654310377087
Massaro, D. W. (1984). Children's perception of visual and auditory speech. Child Development, 55, 1777–1788.
Massaro, D. W., & Light, J. (2004). Improving the vocabulary of children with hearing loss. Volta Review, 104, 141–174.
Massaro, D. W., Thompson, L. A., Barron, B., & Laren, E. (1986). Developmental changes in visual and auditory contributions to speech perception. Journal of Experimental Child Psychology, 41, 93–113. doi:10.1016/0022-0965(86)90053-6
McDaniel, J., & Camarata, S. (2017). Does access to visual input inhibit auditory development for children with cochlear implants? A review of the evidence. Perspectives of the ASHA Special Interest Groups, 2, 10–24. doi:10.1044/persp2.SIG9.10
McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746–748. doi:10.1038/264746a0
Murray, M. M., Lewkowicz, D. J., Amedi, A., & Wallace, M. T. (2016a). Multisensory processes: A balancing act across the lifespan. Trends in Neurosciences, 39, 567–579. doi:10.1016/j.tins.2016.05.003
Murray, M. M., Thelen, A., Thut, G., Romei, V., Martuzzi, R., & Matusz, P. J. (2016b). The multisensory function of the human primary visual cortex. Neuropsychologia, 83, 161–169. doi:10.1016/j.neuropsychologia.2015.08.011
Noreña, A. J., & Eggermont, J. J. (2005). Enriched acoustic environment after noise trauma reduces hearing loss and prevents cortical map reorganization. Journal of Neuroscience, 25, 699–705. doi:10.1523/JNEUROSCI.2226-04.2005
Ohde, R. N., & Sharf, D. J. (1992). Phonetic analysis of normal and abnormal speech. Columbus, OH: Charles E. Merrill.
Patterson, M. L., & Werker, J. F. (2003). Two-month-old infants match phonetic information in lips and voice. Developmental Science, 6, 191–196. doi:10.1111/1467-7687.00271
Pilling, M., & Thomas, S. (2011). Audiovisual cues and perceptual learning of spectrally distorted speech. Language and Speech, 54, 487–497. doi:10.1177/0023830911404958
Pisoni, D. B., Cleary, M., Geers, A. E., & Tobey, E. A. (1999). Individual differences in effectiveness of cochlear implants in children who are prelingually deaf: New process measures of performance. Volta Review, 101, 111–164.
Pollack, D. (1970). Educational audiology for the limited hearing child. Springfield, IL: Charles C. Thomas.
Polley, D. B., Steinberg, E. E., & Merzenich, M. M. (2006). Perceptual learning directs auditory cortical map reorganization through top-down influences. Journal of Neuroscience, 26, 4970–4982. doi:10.1523/JNEUROSCI.3771-05.2006
Qi, S., & Mitchell, R. E. (2012). Large-scale academic achievement testing of deaf and hard-of-hearing students: Past, present, and future. Journal of Deaf Studies and Deaf Education, 17, 1–18. doi:10.1093/deafed/enr028
Rhoades, E. A., Estabrooks, W., Lim, S. R., & MacIver-Lux, K. (2016). Strategies for listening, talking, and thinking in auditory-verbal therapy. In Estabrooks, W., MacIver, K., & Rhoades, E. A. (Eds.), Auditory-verbal therapy for young children with hearing loss and their families, and the practitioners who guide them (pp. 285–326). San Diego, CA: Plural Publishing.
Rice, M. L., Buhr, J. C., & Nemeth, M. (1990). Fast mapping word-learning abilities of language-delayed preschoolers. Journal of Speech and Hearing Disorders, 55, 33–42. doi:10.1044/jshd.5501.33
Robbins, A. M. (2016). Auditory-verbal therapy: A conversational competence approach. In Moeller, M. P., Ertmer, D. J., & Stoel-Gammon, C. (Eds.), Promoting language and literacy in children who are deaf or hard of hearing (pp. 181–212). Baltimore, MD: Paul H. Brookes Publishing Co.
Robertson, V. S., von Hapsburg, D., & Hay, J. S. (2017). The effect of hearing loss on novel word learning in infant- and adult-directed speech. Ear and Hearing, 38, 701–713. doi:10.1097/AUD.0000000000000455
Ross, L. A., Saint-Amour, D., Leavitt, V. M., Javitt, D. C., & Foxe, J. J. (2007). Do you see what I am saying? Exploring visual enhancement of speech comprehension in noisy environments. Cerebral Cortex, 17, 1147–1153. doi:10.1093/cercor/bhl024
Rouger, J., Fraysse, B., Deguine, O., & Barone, P. (2008). McGurk effects in cochlear-implanted deaf subjects. Brain Research, 1188, 87–99. doi:10.1016/j.brainres.2007.10.049
Sacks, C., Shay, S., Repplinger, L., Leffel, K. R., Sapolich, S. G., Suskind, E., & Suskind, D. (2013). Pilot testing of a parent-directed intervention (Project ASPIRE) for underserved children who are deaf or hard of hearing. Child Language Teaching and Therapy, 30, 91–102. doi:10.1177/0265659013494873
Schorr, E. A., Fox, N. A., van Wassenhove, V., & Knudsen, E. I. (2005). Auditory-visual fusion in speech perception in children with cochlear implants. Proceedings of the National Academy of Sciences of the United States of America, 102, 18748–18750. doi:10.1073/pnas.0508862102
Schwartz, R. G., & Leonard, L. B. (1984). Words, objects, and actions in early lexical acquisition. Journal of Speech, Language, and Hearing Research, 27, 119–127. doi:10.1044/jshr.2701.119
Seitz, A. R., Kim, R., & Shams, L. (2006). Sound facilitates visual learning. Current Biology, 16, 1422–1427. doi:10.1016/j.cub.2006.05.048
Sharma, A., Campbell, J., & Cardon, G. (2015). Developmental and cross-modal plasticity in deafness: Evidence from the P1 and N1 event related potentials in cochlear implanted children. International Journal of Psychophysiology, 95, 135–144. doi:10.1016/j.ijpsycho.2014.04.007
Sharma, A., Gilley, P. M., Dorman, M. F., & Baldwin, R. (2007). Deprivation-induced cortical reorganization in children with cochlear implants. International Journal of Audiology, 46, 494–499. doi:10.1080/14992020701524836
Sharma, A., & Glick, H. (2016). Cross-modal re-organization in clinical populations with hearing loss. Brain Sciences, 6, 4. doi:10.3390/brainsci6010004
Sheffert, S. M., Lachs, L., & Hernandez, L. R. (1996). The Hoosier audiovisual multitalker database. In Pisoni, D. B. (Ed.), Research on Spoken Language Processing Progress Report No. 21 (pp. 578–583). Bloomington, IN: Indiana University.
Sindelar, P. T., Rosenberg, M. S., & Wilson, R. J. (1985). An adapted alternating treatments design for instructional research. Education and Treatment of Children, 8, 67–76.
Stacey, P. C., Raine, C. H., O'Donoghue, G. M., Tapper, L., Twomey, T., & Summerfield, A. Q. (2010). Effectiveness of computer-based auditory training for adult users of cochlear implants. International Journal of Audiology, 49, 347–356. doi:10.3109/14992020903397838
Stanford, T. R., & Stein, B. E. (2007). Superadditivity in multisensory integration: Putting the computation in context. Neuroreport, 18, 787–792. doi:10.1097/WNR.0b013e3280c1e315
Stevenson, R. A., Nelms, C. E., Baum, S. H., Zurkovsky, L., Barense, M. D., Newhouse, P. A., & Wallace, M. T. (2015). Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition. Neurobiology of Aging, 36, 283–291. doi:10.1016/j.neurobiolaging.2014.08.003
Stevenson, R. A., Sheffield, S. W., Butera, I. M., Gifford, R. H., & Wallace, M. T. (2017). Multisensory integration in cochlear implant recipients. Ear and Hearing, 38, 521–538. doi:10.1097/AUD.0000000000000435
Stevenson, R. A., Zemtsov, R. K., & Wallace, M. T. (2012). Individual differences in the multisensory temporal binding window predict susceptibility to audiovisual illusions. Journal of Experimental Psychology: Human Perception and Performance, 38, 1517–1529. doi:10.1037/a0027339
Stoel-Gammon, C. (2011). Relationships between lexical and phonological development in young children. Journal of Child Language, 38, 1–34. doi:10.1017/S0305000910000425
Storkel, H. L., & Hoover, J. R. (2010). An online calculator to compute phonotactic probability and neighborhood density on the basis of child corpora of spoken American English. Behavior Research Methods, 42, 497–506. doi:10.3758/BRM.42.2.497
Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. The Journal of the Acoustical Society of America, 26, 212–215. doi:10.1121/1.1907309
Tomblin, J. B., Harrison, M., Ambrose, S. E., Walker, E. A., Oleson, J. J., & Moeller, M. P. (2015). Language outcomes in young children with mild to severe hearing loss. Ear and Hearing, 36, 76S–91S. doi:10.1097/aud.0000000000000219
Tye-Murray, N. (2014). Foundations of aural rehabilitation: Children, adults, and their family members (4th ed.). Stamford, CT: Cengage Learning.
Walker, E. A., & McGregor, K. K. (2013). Word learning processes in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 56, 375–387. doi:10.1044/1092-4388(2012/11-0343)
Wallace, M. T., Perrault, T. J., Hairston, W. D., & Stein, B. E. (2004). Visual experience is necessary for the development of multisensory integration. Journal of Neuroscience, 24, 9580–9584.
Wallace, M. T., & Stein, B. E. (2007). Early experience determines how the senses will interact. Journal of Neurophysiology, 97, 921–926. doi:10.1152/jn.00497.2006
Wechsler-Kashi, D., Schwartz, R. G., & Cleary, M. (2014). Picture naming and verbal fluency in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 57, 1870–1882. doi:10.1044/2014_JSLHR-L-13-0321
Wendel, E., Cawthon, S. W., Ge, J. J., & Beretvas, S. N. (2015). Alignment of single-case design (SCD) research with individuals who are deaf or hard of hearing with the What Works Clearinghouse standards for SCD research. Journal of Deaf Studies and Deaf Education, 20, 103–114. doi:10.1093/deafed/enu049
What Works Clearinghouse (2014, March). Procedures and standards handbook (version 3.0). Retrieved from https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_procedures_v3_0_standards_handbook.pdf
Williams, K. T. (2007). Expressive vocabulary test, second edition (EVT-2). Minneapolis, MN: Pearson Assessments.
Willstedt-Svensson, U., Löfqvist, A., Almqvist, B., & Sahlén, B. (2004). Is age at implant the only factor that counts? The influence of working memory on lexical and grammatical development in children with cochlear implants. International Journal of Audiology, 43, 506–515. doi:10.1080/14992020400050065
Wolery, M., Gast, D. L., & Ledford, J. R. (2014). Comparison designs. In Gast, D. L., & Ledford, J. R. (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed., pp. 297–345). New York, NY: Routledge.
Xu, J., Yu, L., Rowland, B. A., Stanford, T. R., & Stein, B. E. (2012). Incorporating cross-modal statistics in the development and maintenance of multisensory integration. Journal of Neuroscience, 32, 2287–2298. doi:10.1523/jneurosci.4304-11.2012
Yu, L., Rowland, B. A., & Stein, B. E. (2010). Initiating the development of multisensory integration by manipulating sensory experience. Journal of Neuroscience, 30, 4904–4913. doi:10.1523/JNEUROSCI.5575-09.2010
Zhang, L. I., Bao, S., & Merzenich, M. M. (2001). Persistent and specific influences of early acoustic environments on primary auditory cortex. Nature Neuroscience, 4, 1123–1130. doi:10.1038/nn745
Zupan, B., & Sussman, J. E. (2009). Auditory preferences of young children with and without hearing loss for meaningful auditory–visual compound stimuli. Journal of Communication Disorders, 42, 381–396. doi:10.1016/j.jcomdis.2009.04.002
Appendix A. Pseudoword sets for each participant
Participant 1 — Auditory-only set: /fif/, /gʊʒ/, /mub/, /ʧʊk/; Audiovisual set: /gʌʒ/, /mum/, /pɑf/, /ʧʌʃ/; Control set: /fip/, /jʊŋ/, /ʧʊʧ/, /wim/
Participant 2 — Auditory-only set: /fip/, /jʊŋ/, /ʧʊʧ/, /wim/; Audiovisual set: /fif/, /gʊʒ/, /mub/, /ʧʊk/; Control set: /gʌʒ/, /mum/, /pɑf/, /ʧʌʃ/
Participant 3 — Auditory-only set: /gʌʒ/, /mum/, /pɑf/, /ʧʌʃ/; Audiovisual set: /fif/, /gʊʒ/, /mub/, /ʧʊk/; Control set: /fip/, /jʊŋ/, /ʧʊʧ/, /wim/
Participant 4 — Auditory-only set: /fip/, /jʊŋ/, /ʧʊʧ/, /wim/; Audiovisual set: /gʌʒ/, /mum/, /pɑf/, /ʧʌʃ/; Control set: /gʊŋ/, /mib/, /pif/, /ʃʊʧ/
Appendix B. Example unfamiliar objects
Appendix C. Example word list for multimodal word recognition task with phonetic transcription
1. bean /bin/; 2. bone /bon/; 3. bug /bʌg/; 4. chair /ʧɛr/; 5. cod /kɑd/; 6. dame /deɪm/; 7. debt /dɛt/; 8. fit /fɪt/; 9. fool /ful/; 10. give /gɪv/; 11. gut /gʌt/; 12. hash /hæʃ/; 13. lad /læd/; 14. late /leɪt/; 15. mouth /maʊθ/; 16. nose /noz/; 17. rain /reɪn/; 18. rule /rul/; 19. seat /sit/; 20. shape /ʃeɪp/; 21. talk /tɔk/; 22. vice /vaɪs/; 23. wife /waɪf/; 24. work /wɝk/
Appendix D. Standardized theme boxes
© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The unisensory position asserts that providing access to visual input, particularly speechreading cues, would inhibit the ability of a child with hearing loss to process AO input, including spoken words, due to an overreliance on the visual system (Pollack, 1970). Multisensory Theory Supporters of a multisensory theory suggest that simultaneous, integrated AV input facilitates word learning, even when input is unequal across senses (e.g., in hearing loss). Evidence in favor of the multisensory position is drawn from studies of the multisensory nature of speech perception in natural contexts (e.g., Holt, Kirk, & Hay-McCutcheon, 2011), spoken word recognition tasks in unisensory versus multisensory conditions (Erber, 1969, 1972; Gilley, Sharma, Mitchell, & Dorman, 2010; Sumby & Pollack, 1954), and associations between the degree of benefit from visual cues and both language skills and speech intelligibility in children with hearing loss (Bergeson, Pisoni, & Davis, 2003; Kirk, Pisoni, & Lachs, 2002). First, visual cues more heavily influence spoken word recognition for individuals with typical hearing than previously thought (Holt et al., 2011). Visual input can change auditory perception as demonstrated through the McGurk effect (McGurk & MacDonald, 1976), and auditory input can influence visual perception (Seitz, Kim, & Shams, 2006). Although multisensory functions were once thought to be restricted primarily to higher level brain regions and dedicated multisensory subcortical regions feeding these higher level brain regions (e.g., cortical association areas, premotor cortices, and sensorimotor subcortical regions), more recent evidence shows multisensory functions at early processing levels including the primary visual and auditory cortices (e.g., Ghazanfar & Schroeder, 2006; Murray, Lewkowicz, Amedi, & Wallace, 2016; Murray, Thelen, et al., 2016). Additionally, recent evidence indicates that multisensory processing emerges very early in development. Infants as young as two months show detection of mismatched auditory and visual stimuli for vowels (Patterson & Werker, 2003). Infants also exhibit perception of audiovisual synchrony (Lewkowicz, 2010). Second, replicated findings indicate the benefit of AV input relative to AO and visual-only (VO) input for speech recognition tasks for children and adults with and without hearing loss, especially under noisy conditions (e.g., Bergeson & Pisoni, 2004; Bergeson, Pisoni, & Davis, 2005; Campbell, 2008; Erber, 1969, 1972, 1979; Geers, Brenner, & Davidson, 2003; Gilley et al., 2010; Holt et al., 2011; Kirk et al., 2007; Lachs, Pisoni, & Kirk, 2001; Ross, Saint-Amour, Leavitt, Javitt, & Foxe, 2007; Stevenson, Sheffield, Butera, Gifford, & Wallace, 2017; Sumby & Pollack, 1954). Investigators report superadditive effects (i.e., greater performance with multisensory stimuli than the summed performance with unisensory stimuli; Stanford & Stein, 2007) in word recognition accuracy in AV conditions compared with AO and VO conditions (e.g., Gilley et al., 2010). Nonetheless, individuals with hearing loss vary in the degree to which they fuse and benefit from AV stimuli relative to unisensory input. Prominent factors include audiological history and age of cochlear implantation—two measures of auditory experience (Bergeson & Pisoni, 2004; Bergeson et al., 2005; Gilley et al., 2010; Rouger, Fraysse, Deguine, & Barone, 2008; Schorr, Fox, van Wassenhove, & Knudsen, 2005). 
In alignment with these findings, numerous animal models show the importance of experience for developing multisensory integration abilities (e.g., Wallace & Stein, 2007; Wallace, Perrault, Hairston, & Stein, 2004; Yu, Rowland, & Stein, 2010). For example, cats required exposure to coordinated, not random, auditory and visual input to exhibit multisensory enhancement to AV stimuli versus unisensory stimuli (Xu, Yu, Rowland, Stanford, & Stein, 2012). Finally, AV integration and enhancement have been positively correlated with speech-language outcomes including receptive vocabulary, receptive and expressive language, and speech intelligibility in children with hearing loss (Bergeson et al., 2003, 2005; Kirk et al., 2002). These correlations suggest there might be shared variance between multisensory integration and gains in speech and language skills in children with hearing loss. Further research is needed to understand and apply this correlational evidence. In sum, given the AV multisensory nature of speech perception in natural contexts, the importance of experience for multisensory integration, benefits of multisensory input for speech recognition tasks, and positive correlations between AV integration and enhancement with speech-language outcomes, one must carefully consider the possible implications of unisensory versus multisensory intervention strategies for children with hearing loss. The degree and manner in which children with hearing loss receive access to multisensory input could have important implications for speech and language outcomes. Vocabulary Deficits and Instruction Effectiveness in Children with Hearing Loss Because vocabulary skills strongly predict a variety of long-term outcomes, including reading and other academic skills, the importance of word learning and vocabulary skills cannot be overemphasized (Johnson & Goswami, 2010; Qi & Mitchell, 2012). Many children with hearing loss understand and use fewer words than their peers with typical hearing, despite substantial gains in overall spoken language outcomes (Convertino, Borgna, Marschark, & Durkin, 2014; Davidson, Geers, & Nicholas, 2014; Houston, Stewart, Moberly, Hollich, & Miyamoto, 2012; Lund, 2016; Tomblin et al., 2015). They face an uphill battle learning to understand and use age expected vocabulary words due to a history of reduced auditory input, an impoverished auditory signal, and word learning deficits relative to peers with typical hearing (Anderson, 2015; Houston, Carter, Pisoni, Kirk, & Ying, 2005; Houston, Stewart, et al., 2012; Lederberg & Spencer, 2009; Lederberg, Prezbindowski, & Spencer, 2000). Children with hearing loss exhibit variation in word learning skills and use of word learning strategies, which may be attributed to a variety of factors including vocabulary size, age of implantation, and chronological age (Houston, Stewart, et al., 2012; Lederberg & Spencer, 2001, 2009; Lederberg et al., 2000; Robertson, von Hapsburg, & Hay, 2017). Continued investigation of the specific factors influencing word learning skills in children with hearing loss is needed to inform the development and implementation of appropriate vocabulary interventions for children with hearing loss who as a group exhibit vocabulary skills below age expectations (e.g., Lund, 2016). Additional investigation as to which specific features of instruction are most effective for increasing the vocabulary skills of children with hearing loss is also required. 
In a systematic review of vocabulary instruction for children with hearing loss aged 3 to 21 years, Luckner and Cooke (2010) identified 10 studies published between 1967 and 2008 that directly assessed the effects of a specific vocabulary intervention. Luckner and Cooke (2010) called for additional research on vocabulary instruction for children with hearing loss and highlighted the need for replication of published findings. Since 2008, several more studies have been published (e.g., Lund & Douglas, 2016; Lund & Schuele, 2014; Lund, Douglas, & Schuele, 2015; Sacks et al., 2013), but the overall number of investigations remains small. Nonetheless, one theme that has emerged through several empirical articles and reviews is the benefit of explicit (i.e., direct) instruction (e.g., Lund & Douglas, 2016), which is consistent with findings from children with typical hearing (Marulis & Neuman, 2010). Need for Direct Evidence Despite decades of debate between unisensory and multisensory theoretical positions, a literature search did not yield any studies of children with hearing loss directly comparing AO versus AV intervention conditions for word learning or other language tasks. Related findings from adults with cochlear implants or cochlear implant simulations via vocoders may not apply to children with hearing loss because of differences in auditory and language experiences (Bernstein, Eberhardt, & Auer, 2014; Pilling & Thomas, 2011; Stacey et al., 2010). Further, nearly all known experiments focus on word recognition rather than word learning (Fu & Galvin, 2007; Stevenson et al., 2017). One study of word learning in 6- to 10-year-old children with hearing loss used an intervention that provided simultaneous visual and auditory input through a computer-animated tutor; however, this study did not have a comparison condition without access to visual speechreading cues (Massaro & Light, 2004). Researchers and clinicians need direct evidence of how providing visual (speechreading) stimuli may impact the effectiveness and efficiency of word learning interventions for children with hearing loss. Although studies comparing oral communication versus total communication programs and sign language exposure (e.g., Bergeson & Pisoni, 2004; Bergeson et al., 2005; Geers et al., 2017; Geers, 2002; Houston, Beer, et al., 2012; Pisoni et al., 1999) are indirectly related to the current question of AO versus AV intervention conditions for word learning, these studies do not compare spoken AO and AV instruction directly. In addition, they omit the degrees to which oral communication programs, caregivers, and interventionists emphasize unisensory training and only broadly describe the degree of exposure to sign language in home and educational settings. Perhaps even more importantly, whereas auditory and visual stimuli from spoken words offer complementary and correlated information for the same articulatory gestures (Campbell, 2008), auditory and visual stimuli from spoken words with sign language do not (Houston, Beer, et al., 2012). Thus, these two comparisons are not equivalent. Last, these correlational studies lack sufficient evidence to answer causal questions about interventions' active ingredients. Whether AO or AV input results in more efficient word learning for children with hearing loss cannot be discerned from the extant evidence base. Therefore, an adapted alternating treatments single case intervention research design was employed in this investigation. 
This design allowed for the direct, stringent evaluation of the effectiveness and efficiency of AO versus AV intervention conditions for word learning tasks. Research Questions and Hypotheses This study directly compares the effectiveness and efficiency of an explicit receptive word learning intervention for teaching young children with hearing loss to associate pseudowords (i.e., non-real or “nonsense” words such as “weem” and “moob” that conform to the rules of English phonology) with unfamiliar objects in an AO intervention condition versus an AV intervention condition. The following two research questions were used to guide the present investigation: (A) Do children with hearing loss learn pseudowords receptively under AO and under AV intervention conditions? (B) Do children with hearing loss learn pseudowords receptively more efficiently under an AO intervention condition compared with an AV intervention condition when tested in an AO presentation format? When directly comparing the relative effectiveness and efficiency of AO versus AV intervention conditions on word learning, one must use the same testing method regardless of the words’ intervention format. Otherwise, differences in results could be attributable to different testing formats. Providing one consistent testing method inherently creates a mismatch between testing versus intervention formats of presenting the word. Arguably, an AO testing format for words taught in AO as well as AV intervention conditions (a) eliminates the possibility of participants being exposed to visual input for the AO words during testing, (b) provides an appropriately conservative test of AV input benefits, and (c) directly addresses whether AV input inhibits word learning when tested in an AO format. Based on a multisensory theory, the AV intervention condition should result in more efficient word learning. In contrast, based on a unisensory theory, the AO intervention condition should result in more efficient word learning. In addition, unisensory theory posits that the AV presentation format will inhibit the ability of a child with hearing loss to comprehend words when presented without speechreading (visual) cues. This study design allows for a direct test of these competing hypotheses. Methods Experimental Design This study employed a type of single case research design (SCRD) called an adapted alternating treatments design (AATD) to evaluate the effectiveness and compare the efficiency of AO and AV intervention conditions (Sindelar, Rosenberg, & Wilson, 1985; Wolery, Gast, & Ledford, 2014). SCRD is a type of experimental design that enables investigators to draw causal conclusions about the relation between independent and dependent variables (DVs) (Horner et al., 2005). It is ideal for low-incidence populations with notable variability across individuals, such as children with hearing loss (Horner et al., 2005; Wendel, Cawthon, Ge, & Beretvas, 2015), and has been recently highlighted as directly applicable to studying intervention practices in children with hearing loss (Cannon, Guardino, Antia, & Luckner, 2016). There are a number of designs within the overall rubric of SCRD that can be adapted to address different intervention questions. Selecting an appropriate SCRD for a specific research question is a critical decision for all SCRD studies. 
The AATD was selected for this study because it was specifically developed to compare two or more treatments for behaviors that are expected to remain after intervention is withdrawn (i.e., non-reversible behaviors). Word learning is considered a non-reversible behavior because children are expected to retain word knowledge after they acquire it, even if they are no longer being directly taught the information. The AATD orders conditions through rapid iterative alternation and necessitates behavioral sets (in this case word sets) of equal difficulty to draw conclusions regarding the relative efficiency of one condition compared with another. Investigators can also draw conclusions about the effectiveness of a condition by measuring performance on an untrained control set throughout the AATD study. Conclusions are drawn based on whether the participant exhibits a different pattern of acquisition for the target behavior under one condition than is observed without intervention for an equivalent set (i.e., for a control set). Specific to this study, we examined whether participants exhibited greater accuracy identifying the target pseudowords in the AO and AV sets than in the control set. Participants Participants were recruited from a specialized preschool for children with hearing loss that focuses on developing listening and speaking skills. The preschool's teachers do not use manual communication (e.g., American Sign Language). As required by the inclusion criteria, participants achieved standard scores of at least 70 for spoken receptive vocabulary skills on the Peabody Picture Vocabulary Test, Fourth Edition (PPVT-4; Dunn & Dunn, 2007) and spoken expressive vocabulary skills on the Expressive Vocabulary Test, Second Edition (EVT-2; Williams, 2007). Participants were required to be monolingual English speakers based on caregiver report to ensure that English vocabulary deficits were not due to limited English exposure and to avoid multiple phonological systems confounding the word sets' equal difficulty. Individuals were included in the study only if they displayed at least average nonverbal cognitive ability (i.e., standard score of at least 85 on the Kaufman Brief Intelligence Test, Second Edition [Kaufman & Kaufman, 2004] or the Primary Test of Nonverbal Intelligence [Ehrler & McGhee, 2008]), had a negative history for uncorrected vision impairment per caregiver report and/or medical record, and showed no evidence of severe motor impairment. Individuals with low nonverbal cognitive levels, uncorrected vision impairment, and/or severe motor impairment may not have been able to participate in the research tasks. Teachers identified students expected to meet inclusion criteria. Study packets containing a letter describing the study and consent forms were sent to prospective participants. The recruitment process continued until four participants who met the inclusion criteria were secured. Participation in the study was also dependent on the family's ability to commit to the intervention schedule (i.e., three times per week for several months). Institutional Review Board approval to conduct the study was granted and caregivers provided written consent for participants. Four preschool children (three males, one female) aged 4 years 4 months to 5 years 2 months with permanent hearing loss completed the study (see Table 1). Three participants were reported to be white and one to be Asian. None identified as Hispanic. 
For maternal education level, two reported a bachelor's degree and two reported some college. According to the participants' medical records, possible hearing loss was detected at birth for each participant and then confirmed via follow-up testing. All of the participants consistently wore bilateral hearing technology (i.e., at least 8 hours per day per caregiver report; Contrera et al., 2014). All children at the preschool complete listening checks every morning with the educational staff, and a resident educational audiologist provides troubleshooting services and backup equipment as needed. The participants varied in their degree of hearing loss and types of hearing technology devices. Participants 1, 3, and 4 were all fit with hearing technology by 3 months of age. Participants 1 and 4 received bilateral cochlear implants at 12 months of age.

Table 1. Participant characteristics

Participant | Age | Gender | Degree of hearing loss | Device(s) | Age at HA | Age at CI | PPVT SS | EVT SS | NVIQ
1 | 4;7 | Male | Severe to profound | CIs | 3 months | 12 months | 91 | 107 | 97
2 | 5;2 | Female | Moderate to severe (a) | BCHAs | 2.5 years | NA | 101 | 112 | 113
3 | 4;4 | Male | Mild to moderately-severe | HAs | 3 months | NA | 109 | 100 | 113
4 | 4;11 | Male | Severe to profound | CIs | 2 months | 12 months | 89 | 101 | 107

Note. Age is the participant's chronological age at the first day of the pre-baseline evaluation (years;months); Age at CI is the participant's chronological age when his cochlear implants were activated; BCHA = bone conduction hearing aid; CI = cochlear implant; EVT = Expressive Vocabulary Test—Second Edition (Williams, 2007); HA = hearing aid; NA = not applicable; NVIQ = nonverbal intelligence quotient as measured by the Kaufman Brief Intelligence Test, Second Edition (Kaufman & Kaufman, 2004) or Primary Test of Nonverbal Intelligence (Ehrler & McGhee, 2008); PPVT = Peabody Picture Vocabulary Test—Fourth Edition (Dunn & Dunn, 2007); SS = standard score.

(a) Participant 2 presented with a severe mixed hearing loss rising to normal hearing sensitivity at 4,000 Hz and a moderate hearing loss from 6,000–8,000 Hz in the left ear and moderately-severe to moderate conductive hearing loss in the right ear.
Participant 2, who was diagnosed with mixed bilateral hearing loss, bilateral microtia, right-side atresia, left-side cochlear aplasia, and left-side auditory neuropathy and was adopted internationally, received amplification at 2.5 years of age. Both social and medical histories contributed to her age of amplification. Limited information is available regarding audiological and medical history prior to her adoption. The most recent audiological evaluation report in Participant 2's medical record indicated a severe mixed hearing loss rising to normal hearing sensitivity at 4,000 Hz and a moderate hearing loss from 6,000 to 8,000 Hz in the left ear and a moderately-severe to moderate conductive hearing loss in the right ear. Aside from hearing loss, none of the participants were reported to have other disabilities or medical conditions that were expected to impact learning or adaptive skills. Rationale for Instructional Approach and Teaching Pseudowords Due to significantly higher gains reported for studies implementing only explicit rather than only implicit (i.e., incidental) vocabulary interventions, we implemented an explicit word learning intervention (Lund & Douglas, 2016; Marulis & Neuman, 2010). Relatedly, we taught nouns because there is evidence that children with and without language impairment learn nouns more efficiently than action verbs receptively in experimental contexts (Camarata & Leonard, 1985; Leonard et al., 1982; Rice, Buhr, & Nemeth, 1990; Schwartz & Leonard, 1984). We chose to use pseudowords to control the number of exposures participants received. Because we created the pseudowords specifically for this study, it is highly unlikely that the participants would have heard these pseudowords before the study or during the study from anyone other than the examiner. In addition, use of pseudowords enabled greater confidence that the word sets were equivalent and independent, both of which were necessary for the research design (i.e., AATD). Word Set Development Because the AATD necessitates word sets of equal difficulty, three independent pseudoword sets of equal difficulty were carefully developed and counterbalanced across the AV, AO, and control conditions across participants. These pseudoword sets were developed to be taught within the intervention and to be assessed during the study probes as the outcome measure. Each set included four consonant-vowel-consonant (CVC) pseudowords that are phonotactically legal in English phonology. 
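To illustrate the counterbalancing just described, the following minimal Python sketch rotates three equal-difficulty pseudoword sets across the AO, AV, and control conditions for four participants. The set labels and the rotation scheme are hypothetical illustrations of this kind of assignment, not the assignments actually used in the study.

# Illustrative (hypothetical) counterbalancing of three equal-difficulty
# pseudoword sets across the AO, AV, and control conditions.
word_sets = ["Set 1", "Set 2", "Set 3"]
conditions = ["AO", "AV", "control"]

def counterbalance(n_participants):
    """Rotate set-to-condition assignments across participants (Latin-square style)."""
    assignments = {}
    for p in range(n_participants):
        shift = p % len(word_sets)
        rotated = word_sets[shift:] + word_sets[:shift]
        assignments[f"Participant {p + 1}"] = dict(zip(conditions, rotated))
    return assignments

for participant, mapping in counterbalance(4).items():
    print(participant, mapping)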
Word sets were balanced for phoneme audibility, phoneme visibility, and phonological neighborhood density (see Appendix A for complete sets; Stoel-Gammon, 2011; Storkel & Hoover, 2010). Phoneme visibility was categorized by place of articulation and degree of lip movement (Bernthal, Bankson, & Flipsen, 2009). Phoneme audibility was categorized by relative amplitude and acoustic frequency. Relatively low amplitude, higher frequency sounds were classified as low audibility (Ohde & Sharf, 1992). Because vowels demonstrate relatively high audibility, they were distributed by visibility. As shown in Appendix A, each word set contains one high visibility–high audibility, one low visibility–high audibility, one high visibility–low audibility, and one low visibility–low audibility word. The first author randomly assigned unfamiliar object referents to each pseudoword for each participant and confirmed that the participants did not have a "name" for any of the objects prior to intervention. Any object that a participant named during a screening probe was not used for that participant. See Appendix B for example objects. Procedures The first author was the examiner for assessments and intervention. She is a certified, licensed speech-language pathologist who was unfamiliar to the participants prior to the study. The examiner recorded the room's noise level using the SPLnFTT Noise Meter app on an iPhone 4S before each session. This system was calibrated with a dosimeter. The mean sound levels for the two rooms used across all sessions were 42.7 dB(A) and 40.0 dB(A), which provides evidence that the sessions were completed in quiet rooms with limited background noise. All sessions were video-recorded using a Sony Professional camcorder for data collection purposes and to monitor procedural fidelity. The examiner used a speech hoop (10.5" diameter) to prohibit access to speechreading cues without distorting the acoustic signal during listening checks, the DV probes, and the AO intervention condition. Pre-baseline evaluation Participants completed assessments of language, speech production, auditory discrimination, and audiovisual integration for descriptive purposes during the pre-baseline evaluation (i.e., preliminary testing). This descriptive information provides guidance on to whom the study results may apply (i.e., external validity) by enabling readers to compare individual children with the participants of the current study (see Table 2 for results). The pre-baseline evaluation included two measures of audiovisual integration—multimodal word recognition and McGurk tasks. In the multimodal word recognition task, participants repeated 24 recorded tri-phonemic, monosyllabic nouns from the Hoosier Audiovisual Multi-talker Database with a 0 dB signal-to-noise ratio in AV, AO, and VO conditions to yield an AV gain score (Sheffert, Lachs, & Hernandez, 1996; Stevenson et al., 2015). A total of five lists, each containing 24 real words, were used across the participants. Word lists were reused across conditions and across participants. See Appendix C for an example word list. Stimuli were presented using MATLAB 2012b software. All participants completed the multimodal word recognition task in all conditions. As shown in Table 2, participants exhibited a wide range of benefit from AV input relative to unisensory input in the word recognition task (range: 8–50%).
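The AV gain values reported in Table 2 follow the formula given in the table note (based on Stevenson et al., 2015): observed AV accuracy minus the accuracy predicted from the two unisensory accuracies. A minimal Python sketch of that calculation follows; the accuracy values in the example are hypothetical, not participant data.

# AV gain (Stevenson et al., 2015): observed AV accuracy minus the AV accuracy
# predicted from independent unisensory performance.
# gain = AV - [(AO + VO) - (AO * VO)], with accuracies expressed as proportions.

def av_gain(av_accuracy, ao_accuracy, vo_accuracy):
    """Return observed AV accuracy minus the accuracy predicted from unisensory scores."""
    predicted_av = ao_accuracy + vo_accuracy - (ao_accuracy * vo_accuracy)
    return av_accuracy - predicted_av

# Hypothetical proportions correct (out of 24 words per condition).
gain = av_gain(av_accuracy=0.75, ao_accuracy=0.50, vo_accuracy=0.25)
print(f"AV gain = {gain:.3f}")  # 0.75 - 0.625 = 0.125, i.e., a 12.5 percentage-point gain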
Table 2. Descriptive information for participants

Participant | TACL SS | TEXL SS | TAPS-3 WD | Arizona-3 SS | AV Gain | McGurk
1 | 92 | 94 | 10 | 96 | 32% | .04
2 | 88 | 84 | 7 | 90 | 8% | .00
3 | 108 | 98 | 10 | 90 | 21% | DNC
4 | 98 | 86 | 6 | 93 | 50% | DNC

Note. TACL and TEXL index language; TAPS-3 WD indexes auditory discrimination; Arizona-3 indexes speech production; AV Gain and McGurk index multisensory processing. Arizona-3 = Arizona Articulation Proficiency Scale—Third Revision (Fudala, 2000); AV Gain = gain in percent accuracy identifying words in the audiovisual (AV) condition relative to the child's predicted AV percent accuracy based on his or her unisensory performance, calculated as [AV accuracy] – [(auditory-only accuracy + visual-only accuracy) – (auditory-only accuracy × visual-only accuracy)] based on Stevenson et al. (2015); BCHA = bone conduction hearing aid; CI = cochlear implant; DNC = did not complete; HA = hearing aid; McGurk = percentage of perceived McGurk illusions relative to unisensory performance (Stevenson, Zemtsov, & Wallace, 2012); SS = standard score; TACL = Test of Auditory Comprehension of Language (Carrow-Woolfolk & Allen, 2014a); TAPS-3 WD = Test of Auditory Processing Skills—Third Edition Word Discrimination subtest (scaled score; Martin & Brownell, 2005); TEXL = Test of Expressive Language (Carrow-Woolfolk & Allen, 2014b).

For the McGurk task, two participants demonstrated sufficiently accurate and reliable responses during the practice trials and unisensory conditions (i.e., AO and VO) to complete the task. For the multisensory trials, participants were shown dubbed videos of a woman saying, "Ba," or "Ga," with congruent or incongruent AV stimuli (McGurk & MacDonald, 1976). The McGurk incongruent (illusion) trials presented a video of a woman saying, "Ga," with an audio recording of "Ba." Across all conditions, participants reported the syllable they perceived by pressing a computer key (e.g., "B" for "Ba;" Baum, Stevenson, & Wallace, 2015). 
The strength of the McGurk effect was calculated as the proportion of perceived illusions relative to unisensory baseline performance (p(AV McGurk) × [1 − p(Unisensory /dɑ/)]; Stevenson, Zemtsov, & Wallace, 2012). Because no participants reported perceiving "da" in the unisensory condition, the results are equivalent to those that do not take into account unisensory performance. Participant 1 pressed "D" on one of the 20 multisensory incongruent trials, resulting in a score of .04 (scale 0 to 1), but he spontaneously and immediately stated that he made a mistake. Participant 2 did not report perceiving the McGurk illusion. Because age is a known factor in the likelihood of individuals perceiving the McGurk illusion, age may at least partially explain the participants' performance (Hillock, 2010; Massaro, 1984; Massaro, Thompson, Barron, & Laren, 1986; McGurk & MacDonald, 1976). Older children are more likely to perceive the illusion than younger children (Hillock, 2010). Nonetheless, some preschool children have been reported to perceive the McGurk effect (e.g., McGurk & MacDonald, 1976). DV probe All probes were administered without visual cues because one must use the same testing format regardless of the pseudowords' intervention condition when comparing performance across intervention conditions. During DV probes, participants selected unfamiliar objects corresponding to the target pseudowords within a given set (e.g., AV, AO, or control). Objects in the DV probe for a given set included unfamiliar objects assigned to the four target pseudowords, a known object (e.g., ball), and two foils (i.e., unfamiliar objects never taught). The DV is the percent accuracy (i.e., correct responses/total trials × 100) for identifying target words receptively within 5 s in a field of seven objects in an AO probe (i.e., examiner uses a speech hoop). The examiner sat across the table from the participant and placed the seven objects in the randomly drawn order on the evenly spaced stickers across the table. The examiner covered her mouth with a speech hoop throughout the probe. She said, "Give me the (target word)," with an upturned palm for each target pseudoword in a predetermined random order. The examiner did not provide repetitions of target pseudowords, even if the participant requested one, or any feedback regarding response accuracy, except for the one known object per set. To maintain engagement and effort, the examiner was permitted to provide positive reinforcement for identifying the known object for each group and for behaviors not targeted in the intervention (e.g., remaining seated and waiting quietly for the next direction; Wolery et al., 2014). Note that the probe was receptive only; participants were not asked to label objects. All data were entered twice into an electronic spreadsheet and initial data entry errors were corrected to ensure data entry accuracy. Further details about the DV probe are available in the study's procedure manual, which is available from the first author upon request. DV probes lasted approximately 2–10 min depending on whether the control set was assessed that session and the participant's response speed. AO and AV sets were administered each session. The control set was administered every two to three sessions to provide sufficient data for interpreting the results without conducting excessive assessments that might reduce the saliency of the learning task. Baseline phase The baseline phase occurred prior to teaching the participants any of the target pseudowords. 
Baseline sessions occurred three times per week until the participant demonstrated a stable pattern of response (i.e., low variability in accuracy level with either a decreasing or flat trend), with a minimum of three sessions (Wolery et al., 2014). In baseline sessions, participants completed a listening check (i.e., participant repeated the Ling 6 sounds produced by the examiner to assess whether his or her hearing devices were working properly; Ling, 1989; Tye-Murray, 2014), followed by the DV probe. Because the targets were constructed specifically for this study, any correct responses prior to intervention must have arisen from chance. Comparison phase The receptive word learning intervention was provided only during the comparison phase. Comparison phase sessions occurred three times per week and began with a listening check, followed by the DV probe, and then the AO and AV intervention conditions. The order of the AO and AV intervention conditions was alternated across sessions and counterbalanced across participants. In each session, the second condition occurred immediately after the first. Participants were taught to associate pseudowords with unfamiliar objects using a procedure adapted from Leonard et al. (1982). The independent variable that differentiated the AO and AV intervention conditions was the presence or absence of access to speechreading cues (i.e., visual stimuli). All other potentially influential variables were held constant. The examiner never taught any of the pseudowords in the control set. The control set pseudowords were only included during probes at the scheduled times. For the intervention, the examiner introduced each target object one at a time, while engaging with the participant with the selected theme box (see Appendix D). These theme boxes were used to facilitate active manipulation of the unfamiliar objects, maintain the participants’ interest, and more closely approximate how children interact with unfamiliar items in therapeutic and community settings. The theme box items were not used as unfamiliar target objects. Instead, the examiner introduced unfamiliar objects one at a time while the participant engaged with the theme box. The same theme box was used during the AO and AV intervention conditions for the session to maintain consistency of the intervention across conditions. The examiner completed three key intervention components for each of the four target objects: (a) eight labeling exposures, (b) two elicited labels, and (c) two opportunities to identify target objects. For each labeling exposure, the examiner secured the participant’s eye contact, named the unfamiliar object in the sentence final position (e.g., “Here’s the ____.”), and produced a show gesture with the target object toward the participant. At least one utterance or play action by the examiner or participant occurred between each exposure. To maintain uniformity across conditions, the examiner used one hand for the show gesture because she continuously held the speech hoop with one hand in the AO intervention condition. For the elicited labels, the examiner asked the participants to say each target pseudoword two times per session (e.g., “What’s this?” while holding the target object). Participants were not required to say each target pseudoword. If the participant declined to say the target pseudoword, the examiner proceeded with the intervention procedure. The examiner repeated these steps for all four target objects in the set in a predetermined, random order. 
At the end of each intervention condition, the examiner provided two opportunities for the participant to identify each target object. She asked the participant to find each target object twice (e.g., “Where’s the ____?” or “Find the ____.”) with all target objects and a standard collection of four additional unfamiliar objects (i.e., not target objects) present and provided feedback on the participant’s accuracy. After a participant reached criterion of 75% accuracy across three consecutive sessions for the AO or AV pseudoword set in the DV probe, instruction was provided only in the unmastered condition. Intervention ceased for the mastered condition. The comparison phase ended when both conditions reached the predetermined criterion. Maintenance phase Maintenance data were collected using baseline procedures 1 week and 2 weeks after the last comparison phase session. Maintenance DV probes were administered in the same manner as those in the baseline phase. Procedural fidelity Prior to initiating intervention, the examiner achieved procedural fidelity of at least 90% accuracy across two consecutive sessions with young children not participating in the study. Throughout the study, a graduate research assistant (RA) coded at least 25% of sessions across each phase and condition for each participant. All sessions were video-recorded, and then the graduate RA randomly selected sessions via a random number generator. The examiner was blind to which sessions were going to be selected for procedural fidelity coding. Using direct observation of video-recorded sessions, the RA collected procedural fidelity data for the independent variables and numerous control variables (i.e., behaviors that should remain constant across conditions; Gast, 2014) on an electronic spreadsheet. Procedural fidelity behaviors for probes were consistent across all phases. Only the independent variable (i.e., presence or absence of the speech hoop) varied across the AO and AV intervention conditions’ procedural fidelity behaviors. Complete descriptions are available in the study’s procedure manual available upon request from the first author. Procedural fidelity was analyzed formatively at the behavior level. Across all participants, the examiner demonstrated a mean of 99.8% (range: 93–100%) accuracy for administering probes and 98.4% (range: 90–100%) and 98.6% (range: 94–100%) accuracy for AO and AV intervention procedures, respectively. As an additional evaluation of consistency across conditions and to ensure no systematic bias in salience of pseudoword presentation, the duration of target pseudowords in each condition during DV probes and AO and AV intervention conditions was calculated using Audacity 2.1.2 software for two to three randomly selected sessions per participant (at least 12.5% of each participant’s sessions). The duration of pseudowords was very similar across conditions for each participant during probes and intervention. The mean duration for target pseudowords in the AO and AV DV probes was .71 s (SD = .031 s) and .72 s (SD = .027 s), respectively. The mean duration for target pseudowords in the AO and AV intervention conditions was .59 s (SD = .025 s) and .59 s (SD = .031 s), respectively. Interobserver Agreement A secondary coder (e.g., graduate RA) independently scored the DV probe for at least 25% of sessions in each condition for each participant for interobserver agreement (IOA). The primary coder was blind to which sessions were to be coded for IOA. 
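Point-by-point agreement, used here for coder training and for the IOA values reported next, is conventionally computed as the number of trials on which the two coders agree divided by the total number of trials scored. A minimal Python sketch of that calculation follows; the trial-level codes are hypothetical and are included only to illustrate the arithmetic.

# Point-by-point interobserver agreement (IOA): agreements divided by total trials.
# The trial-level codes below are hypothetical examples, not study data.

def point_by_point_ioa(primary, secondary):
    """Proportion of trials on which two coders assigned the same score."""
    if len(primary) != len(secondary):
        raise ValueError("Coders must score the same number of trials.")
    agreements = sum(p == s for p, s in zip(primary, secondary))
    return agreements / len(primary)

primary_codes = [1, 0, 1, 1, 0, 1, 1, 0]    # 1 = correct response, 0 = incorrect
secondary_codes = [1, 0, 1, 1, 0, 1, 0, 0]
print(f"IOA = {point_by_point_ioa(primary_codes, secondary_codes):.2f}")  # agreement on 7 of 8 trials = .88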
The secondary coder was trained to .90 point-by-point agreement with the primary coder for independent coding of the participant’s accuracy responding to the DV probes for three consecutive sessions via video recordings of non-participants (Ayres & Ledford, 2014). Primary and secondary coders completed regular discrepancy discussions. Overall mean IOA was .99 (range: .90–1.00). The mean IOA was .99 (range: .97–1.00) for the baseline phase, .99 (range: .90–1.00) for the comparison phase, and .99 (range: .96–1.00) for the maintenance phase. These IOA are high and well above the recommended standard (i.e., .80; Horner et al., 2005). Analysis Approach Results were interpreted via visual analysis (visual inspection), the primary method for analyzing single case research data (Gast & Spriggs, 2014). As described in the What Works Clearinghouse (WWC) Procedures and Standards Handbook, Version 3.0, multiple features were examined during visual analysis to determine the presence of a functional relation between the independent variable and the DV: level, trend, variability, overlap, and consistency of data in similar phases (What Works Clearinghouse, 2014). Although WWC does not provide explicit standards for the AATD, principles from other relevant SCRDs and guidelines from experts (e.g., Ledford & Gast, 2018) were applied. We used a mastery criterion of at least 75% accuracy across three consecutive sessions for each pseudoword set. Because we used pseudowords, the participants cannot learn the pseudowords outside of intervention. Thus, participants were only expected to learn pseudowords in the AV and AO sets, not the control set. Results Effectiveness of Audiovisual and Auditory-Only Intervention Conditions (Research Question A) Evidence for the effectiveness of the AO and AV intervention conditions is drawn primarily from comparing the level and trend of the AO and AV intervention conditions to the control (no-teaching) condition in the comparison phase. As seen in Figures 1–4, data from Participants 1, 2, and 4 achieved the mastery criterion (i.e., at least 75% accuracy across three consecutive sessions) for the AO and AV sets and show a clear differentiation in level and trend for the AO and AV sets from the control set. Data from these three participants show higher accuracy and increasing trends for the AO and AV sets as compared with lower accuracy and a flat (i.e., not increasing) trend for the control set. These data indicate strong evidence of a functional (causal) relation between the AO and AV interventions and increased identification of target pseudowords receptively, as predicted. Data from Participant 3 exhibit moderate evidence of a functional relation between the AO and AV interventions and accuracy identifying target pseudowords. Although there is a small difference in level between the AO and AV sets relative to the control set, there is a flat rather than an increasing trend across the comparison phase for the AO and AV sets indicating a consistent but relatively weaker functional relation as compared to the other participants. Participants 1, 2, and 4 all reached mastery criterion (i.e., 75% accuracy identifying target pseudowords across three consecutive sessions) for the AO and AV sets. Nonetheless, they differed substantially in the number of sessions required to reach mastery (range: 4–18 sessions) indicating variation in learning rates across participants. Participants 1 and 4 exhibited rapid increases in accuracy with limited variability. 
Participant 2 required more than twice as many sessions to reach mastery and presented with substantial session-to-session variability. Maintenance data provide information about the degree to which participants were able to identify taught pseudowords 1 and 2 weeks after intervention ended without additional instruction (i.e., short-term retention). Maintenance data across participants indicate moderate evidence of retention for taught pseudowords based on differentiation of the AV and AO sets from the control set during the maintenance phase. Participant 1 At baseline, Participant 1 exhibited low accuracy identifying pseudowords for the AO and AV sets as shown in Figure 1. Although he achieved 50% accuracy for the control set at Session 1, the decreasing, counter-therapeutic trend (i.e., moving in the opposite direction from that expected to occur with the onset of intervention) strongly suggests those results were due to chance. That is, the initial variation in performance is likely the result of random responses, which is not uncommon in studies involving comprehension of pseudowords. Participant 1 demonstrated a shift in level after the first intervention session for the AO and AV sets, while the control set continued to exhibit a flat slope. Data for the AO and AV sets show an accelerating trend with a moderate, positive slope. Participant 1's accuracy declined from the prior session on only one occasion. Accuracy for the control set remained at 0–25%. Even though the target pseudowords were randomly paired with unfamiliar objects, participants were expected to identify some objects accurately because they were identifying objects from a closed set. It is important to recall that because there were four trials per set, participants could only achieve scores of 0%, 25%, 50%, 75%, and 100% accuracy. There is a difference in level and trend between the AO and AV sets and the control set during the comparison phase, indicating a functional relation between the AO and AV interventions and increased identification of target pseudowords receptively, as predicted. Participant 1's accuracy for the AO and AV sets exceeded the control set across the maintenance period, which spanned four weeks. Participant 2 As shown in Figure 2, Participant 2 demonstrated low, stable performance in the baseline phase. During the comparison phase, she exhibited substantial variability in accuracy across sessions. Participant 2 remained seated and responded at an appropriate pace without overt signs of inattention or impulsivity during the DV probes. She was observed to label some items correctly spontaneously as the examiner set up the DV probe, but then respond incorrectly when asked to give them to the examiner. This behavior and Participant 2's observed variability highlight the need to consider performance and measurement error when interpreting assessment results of young children. Participant 2 achieved 50% accuracy twice for the control set; however, accuracy fell to 0% on the next session each time. Thus, chance rather than learning is the most likely explanation for this control set performance. Nonetheless, out of 10 sessions, her performance for the AO and AV sets exceeds that for the control set in eight and seven sessions, respectively. Further, a clear differentiation in level and trend from the control set is evident beginning at Session 18 for the AO set and Session 17 for the AV set. Participant 2 achieved the mastery criterion in 18 comparison phase sessions for the AO and AV sets. 
These data indicate a functional relation between the AO and AV interventions and increased accuracy identifying target pseudowords receptively, as predicted. During the maintenance phase, Participant 2 achieved 25–75% accuracy for AO and AV sets and 0% accuracy for the control set, which provides weak evidence of retention. Participant 3 During the baseline phase, Participant 3 achieved 0% to 25% accuracy for the AO and control sets, as shown in Figure 3. Because he achieved 50% accuracy for the AV set for Session 3, the baseline phase was extended by one session during which his performance returned to 0% accuracy. Participant 3, the youngest participant, required notably more redirection and more direct requests for his attention before exposures were administered during intervention than the other participants. Although accuracy for the AO and AV sets exceeds the control set for most sessions in the comparison phase, the trend is flat rather than increasing. Because of this trend, intervention was discontinued and no maintenance data were collected. Thus, Participant 3's data provide moderate evidence of a functional relation between the AO and AV interventions and the DV. Participant 4 As shown in Figure 4, Participant 4 demonstrated low, stable performance in the baseline phase. During the comparison phase, he exhibited a shift in level after three intervention sessions for the AO set and after one intervention session for the AV set. Data for the AO and AV sets show an accelerating trend with moderate slope and minimal variability. His accuracy declined from the immediately preceding session on only one session. Accuracy for the control set remained at 0–25%. The clear difference in level and trend between the control set and the AO and AV sets indicates a functional relation between the AO and AV interventions and increased identification of target pseudowords receptively. Participant 4 achieved 75–100% accuracy in the maintenance phase for AO and AV sets relative to the control set at 0–25% accuracy, indicating strong retention. In summary, three participants provide strong evidence in support of the hypothesis that participants would learn pseudowords under AO and AV intervention conditions. The fourth participant (i.e., Participant 3) provided moderate evidence in support of this hypothesis. Efficiency of Audiovisual and Auditory-Only Intervention Conditions (Research Question B) None of the participants exhibited a differential rate of word learning in the AO versus AV intervention conditions as shown in Figures 1–4. Because the participants achieved the mastery criterion close in time across intervention conditions and exhibited similar trends with overlapping data points between AO and AV sets, their data do not provide evidence of superior efficiency for either condition. The data are consistent with the null hypothesis of approximately equal word learning efficiency in the AO and AV intervention conditions. There was no inhibition evident when providing access to visual cues alongside auditory input, nor was there a differential benefit for the AV condition on accuracy identifying target pseudowords presented without access to visual cues. Participants achieved the mastery criterion for AO and AV sets no more than two sessions apart and all exhibited overlapping data points. Thus, neither the hypothesis asserted by the multisensory theory nor the hypothesis asserted by the unisensory theory was supported.
Figure 1. Participant 1. Percent accuracy on auditory-only probes of target pseudowords; AO = auditory-only; AV = audiovisual. Sessions 1–12 were completed three times per week. Sessions 13–15 were completed one time per week. The horizontal dotted line denotes the 75% criterion level.
Figure 2. Participant 2. Percent accuracy on auditory-only probes of target pseudowords; AO = auditory-only; AV = audiovisual. Baseline and comparison sessions were completed three times per week. Maintenance sessions were completed one time per week. The horizontal dotted line denotes the 75% criterion level.
Figure 3. Participant 3. Percent accuracy on auditory-only probes of target pseudowords; AO = auditory-only; AV = audiovisual. Baseline and comparison sessions were completed three times per week. The horizontal dotted line denotes the 75% criterion level.
Figure 4. Participant 4. Percent accuracy on auditory-only probes of target pseudowords; AO = auditory-only; AV = audiovisual. Baseline and comparison sessions were completed three times per week. Maintenance sessions were completed one time per week. The horizontal dotted line denotes the 75% criterion level.
Discussion Effectiveness of Audiovisual and Auditory-Only Intervention Conditions (Research Question A) The replicated effect for the effectiveness of the AO and AV intervention conditions relative to the control set indicates that participants learned taught pseudowords with and without access to visual cues and did not learn pseudowords not taught (i.e., control set pseudowords). No probable threats to internal validity that would explain the results were identified. Thus, learning can be reasonably attributed to the intervention. Results yield strong evidence that three of the four participants were able to pair pseudowords and unfamiliar objects following direct instruction for the objects' labels. Our findings are consistent with our hypothesis and prior findings indicating that children with hearing loss can learn words with explicit instruction (e.g., Houston et al., 2005; Lund & Douglas, 2016; Willstedt-Svensson, Löfqvist, Almqvist, & Sahlén, 2004). 
Discussion
Effectiveness of Audiovisual and Auditory-Only Intervention Conditions (Research Question A)
The replicated effect of the AO and AV intervention conditions relative to the control set indicates that participants learned the taught pseudowords with and without access to visual cues and did not learn the pseudowords that were not taught (i.e., control set pseudowords). No probable threats to internal validity that would explain the results were identified. Thus, learning can reasonably be attributed to the intervention. The results yield strong evidence that three of the four participants were able to pair pseudowords with unfamiliar objects following direct instruction on the objects' labels. Our findings are consistent with our hypothesis and with prior findings indicating that children with hearing loss can learn words with explicit instruction (e.g., Houston et al., 2005; Lund & Douglas, 2016; Willstedt-Svensson, Löfqvist, Almqvist, & Sahlén, 2004).
The participants' performance patterns highlight the variability across children with hearing loss, even when they share a number of characteristics (e.g., prelingual hearing loss, consistent hearing technology use, and attendance at the same specialized preschool). Variation within and across young children in their responses to interventions ought to be expected and addressed clinically through data-informed decisions in special education, because demographic and other common child characteristics may not readily account for these variations (Ledford et al., 2016). In future studies, factors that explain a substantial portion of the variance in response may be identified.
Efficiency of Audiovisual and Auditory-Only Intervention Conditions (Research Question B)
None of the participants exhibited a differential effect between the AO and AV sets for word learning efficiency during probes administered without access to visual cues. The three participants who reached the mastery criterion did so for both intervention sets. Further, they achieved the mastery criterion no more than two sessions apart and exhibited similar trends in the comparison phase for each set. Therefore, neither the pattern predicted by the unisensory theory nor the one predicted by the multisensory theory was observed. Recall that all pseudowords were tested without visual cues to provide equivalent testing conditions, even though they were taught under different conditions. Although the teaching and testing procedures align for the AO intervention condition, for the AV intervention condition participants were taught with access to speechreading cues and tested without access to speechreading cues. Blocking visual cues during the probes eliminated the possibility of participants being exposed to visual input for the AO pseudowords during testing, provided an appropriately conservative test of the benefits of AV input, and directly addressed whether AV input inhibits word learning for identifying words when speechreading cues are not available.
The AO intervention condition can be viewed as unusual and "unnatural" from a social-pragmatic perspective (see Rhoades et al., 2016). Thus, strong positive evidence is needed to support its implementation. That is, one could argue that the AO intervention condition should be recommended only if the socially and pragmatically appropriate AV intervention condition is inhibitory. The results of this study, and the extant literature thus far, do not provide such evidence. Additionally, data from the participants did not indicate a benefit of AV input for efficiency of word learning, as would be predicted by the multisensory position. The results could be partially related to task difficulty level. To optimize the learning conditions for an arguably fair comparison between the AO and AV intervention conditions, the intervention sessions were conducted in a quiet room with minimal background noise. That is, we chose to ensure optimal access to the auditory signal to test the AO intervention condition under ideal listening conditions. Finally, only the examiner and participant were present, to simulate a common therapy session scenario. Indeed, the extant literature on speech perception clearly indicates that AV input yields greater benefit over AO input at poorer signal-to-noise ratios (e.g., Sumby & Pollack, 1954) than were employed herein.
Participants may exhibit a differential response in conditions with a poorer signal-to-noise ratio, such as classroom, home, or other less controlled natural language-learning settings in which auditory input is attenuated.
Limitations
Before discussing the implications of this study and future directions, several limitations should be acknowledged. First, because the examiner conducted both the assessment and intervention portions of the study, she was not blind to condition when scoring. This scenario can result in detection bias (i.e., systematic differences between conditions in determining outcomes; Cochrane, 2017). Having separate, blinded examiners for assessment and intervention was not feasible for this study but should be considered in future studies. In the present study, however, the high point-by-point interobserver agreement (IOA) reduces the likelihood that detection bias was present. Second, the study included only four children, and the external validity of the results is limited by the characteristics of those four participants. The same results may not be found for all children with hearing loss. It is unknown whether the same results would be found for children at other ages, those with higher or lower language levels, or those with different hearing histories, including children who have had access to sound for shorter periods of time. The specific procedures are expected to require adaptations for children at different ages and/or language levels. Even though the study procedures were designed as a salient, simple word learning paradigm, the difficulty level was beyond the current abilities of one participant. This study focused on fast mapping (i.e., the ability to link a word to its referent after only a few exposures; Carey & Bartlett, 1978) and, to a limited degree, retention for word learning. It did not address word extension or other aspects of semantic knowledge, which are other components of vocabulary knowledge (Walker & McGregor, 2013). In addition, we tested only receptive, not expressive, performance in the word learning task. These aspects of word learning should be addressed in future studies, particularly in light of the identified weaknesses in lexical organization, including taxonomic, semantic, and phonological organization, in children with hearing loss (Lund & Dinsmoor, 2016; Wechsler-Kashi, Schwartz, & Cleary, 2014). It is important to test whether tasks requiring more complex semantic knowledge (e.g., relations between words or more abstract concepts) or independent word production will reveal differences in learning efficiency under AO versus AV interventions.
Strengths
Five strengths should be acknowledged. First, the use of pseudowords enabled tight control of the number of exposures to the target words and strict balancing of the acoustic features of the words across the AO and AV intervention conditions. When using real words, it is impossible to control for the number of prior exposures participants have had to a given word or for partial understanding of the words, even for words they do not identify or use correctly. Use of pseudowords ameliorates this concern. In addition, neither caregivers nor teachers were aware of which words participants were being taught. Second, numerous steps were taken, including the use of pseudowords, to ensure equal difficulty across word sets, which is a critical characteristic of adapted alternating treatments design (AATD) studies. No evidence of unbalanced sets was identified.
Third, participants' ability to identify target words was tested at the beginning of each session rather than at the end, to provide a more distal measure of word learning that more closely approximates what children must do to build their vocabulary skills. Being able to identify a word immediately after instruction but not later in the day or week offers little value to children in their everyday settings. Additionally, the target pseudowords were taught until the participant reached a predetermined criterion for three consecutive days, rather than for a set number of sessions, which may more closely align with the goal structure in educational and therapeutic settings, at least for some children with hearing loss. Fourth, including a control set, which is not required in an AATD study, permitted evaluation of the effectiveness of the AO and AV intervention conditions and increased the internal validity of the study design. Fifth, high procedural fidelity across all participants and phases supports the study's internal validity.
Theoretical and Clinical Implications
Our findings do not support the unisensory (auditory-only) theoretical assertion, heretofore untested, that visual access provided alongside auditory input differentially inhibits auditory processing in children with hearing loss for a word learning task compared with conditions that isolate the auditory sense (Pollack, 1970). We did not observe any detrimental effects of permitting simultaneous visual access to speechreading cues during the AV intervention condition on the participants' performance, even when the probe blocked access to these visual cues. This theoretical implication ought to be explored further in future studies and applied educationally if replicated and extended across learning conditions. Importantly, the results do not provide evidence supporting the practice of eliminating visual access to the speaker's mouth and throat when teaching spoken words to children with hearing loss in order to increase the efficiency of word learning. Participants demonstrated the ability to identify taught pseudowords in assessments that blocked access to speechreading cues regardless of whether they were permitted access to these cues during teaching sessions. The "hand cue" may be used to block access to visual speechreading cues: the speaker covers his or her mouth while speaking to children with hearing loss in an attempt to focus the child's attention and skills on auditory rather than visual cues by physically restricting access to seeing the teacher's mouth (Estabrooks, 1998). Our findings may be added to the reasons Rhoades et al. (2016) offered for why "the hand cue (or any substitute for it) is no longer recommended by AVT [Auditory-Verbal Therapy]" (p. 287). Although Rhoades et al. (2016) are appropriately critical of the lack of evidence supporting the use of the hand cue, they rely on indirect evidence from studies of children and adults with typical hearing and studies using speech recognition or non-language tasks, rather than language-learning tasks (Rhoades et al., 2016). The direct evidence from the current study bolsters their recommendation to eliminate use of the hand cue. Even though we did not use the hand cue in the current study, the use of a speech hoop should be viewed as a "substitute for it." Like the hand cue, the speech hoop is intended to block access to speechreading cues and focus the child's attention on the auditory stimuli alone.
The current study's evidence could also apply more broadly to other practices that restrict or eliminate access to visual cues. Note that the recommendation to refrain from prohibiting access to speechreading cues applies to intervention, not assessment. Evaluating the degree to which a child can perform a given task with versus without access to speechreading cues can provide clinically valuable information for treatment planning, clinical recommendations, and determining candidacy for hearing technology, including cochlear implantation and assistive listening devices such as digital modulation (DM) or frequency modulation (FM) systems. In addition, the current study did not identify the beneficial effect of access to speechreading cues predicted by the multisensory position. It is possible that children with hearing loss would exhibit more rapid word learning in an AV intervention condition relative to an AO intervention condition if intervention occurred in a room with a poorer signal-to-noise ratio, similar to the acoustic environment of many classrooms. Nonetheless, in light of the current evidence, one must consider very carefully whether prohibiting access to speechreading cues during intervention is most beneficial. With adequate support through collaboration with researchers and professional development, clinicians could use SCRD principles when collecting data under varying conditions to determine whether a given child demonstrates a differential response to AO or AV input for a particular skill.
Future Directions
This study provides evidence of successful word learning in AO and AV intervention conditions for preschool children with hearing loss. Continued work is needed to determine whether one or both of these conditions results in greater learning efficiency for certain tasks for at least some children with hearing loss. Most immediately, future studies similar to the current study should be conducted (a) with words taught under a poorer signal-to-noise ratio that simulates the acoustic environment of a typical classroom but tested in a quiet environment, (b) with learning tasks of varying difficulty levels, (c) with learning contexts that more closely approximate classroom instruction, and (d) with participants of different chronological ages and language levels. To understand the word learning skills of younger children with hearing loss and those with lower language levels under AO and AV intervention conditions, less demanding tasks and more proximal measures should be considered. These studies could contribute to identifying the active ingredients in word learning interventions for children with hearing loss. Future studies will also need to address whether and how pretreatment characteristics (e.g., age, language abilities, audiological history and skills, and multisensory processing skills) influence differential responses to AO versus AV intervention conditions in order to individualize intervention. Using pretreatment characteristics offers the potential to match individuals efficiently and effectively with specific interventions or intervention features. Differences in auditory access, time post cochlear implantation, and length and/or degree of auditory deprivation might all influence an individual's response to AO versus AV input. Individual differences in multisensory processing abilities have been documented, but the mechanisms responsible for such differences are not yet well understood (Murray et al., 2016; Stevenson et al., 2017).
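As one concrete illustration of the degraded listening condition proposed above for future studies (teaching under a poorer signal-to-noise ratio while probing in quiet), the sketch below scales a noise recording so that a speech stimulus is mixed at a chosen SNR in decibels. It is a generic signal-processing sketch under stated assumptions, not part of this study's procedures; the sampling rate, the tone standing in for speech, and the 0 dB target are illustrative.

import numpy as np

def rms(signal):
    """Root-mean-square level of a signal."""
    return float(np.sqrt(np.mean(np.square(signal))))

def mix_at_snr(speech, noise, target_snr_db):
    """Add noise to speech after scaling the noise to yield the target SNR in dB."""
    noise = noise[: len(speech)]  # trim the noise to the stimulus length
    gain = rms(speech) / (rms(noise) * 10 ** (target_snr_db / 20))
    return speech + gain * noise

# Hypothetical one-second stimulus at 16 kHz: a tone stands in for speech, white noise for babble.
fs = 16_000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 220 * t)
noise = np.random.default_rng(0).normal(0.0, 0.05, fs)

degraded = mix_at_snr(speech, noise, target_snr_db=0)  # 0 dB is in the range often reported for classrooms
print("Achieved SNR (dB):", 20 * np.log10(rms(speech) / rms(degraded - speech)))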
Measures of multisensory integration may provide an avenue for capturing differences in how children with hearing loss respond to AO versus AV intervention conditions, both proximally and over time. Use of multisensory integration measures will require (a) continued research on the development of multisensory integration in children with and without hearing loss, (b) thorough evaluations of the reliability and construct validity of specific tasks, and (c) development of tasks appropriate for individuals with a wide variety of language and developmental levels. Of note, in the current study all participants completed the multimodal word recognition task, and two of four completed the McGurk task, with little to no instruction. The current study was not designed to explain variation in performance on the multisensory assessments. In the future, large randomized controlled trials could evaluate differences in behavioral outcomes (e.g., performance on learning tasks) and neural outcomes (e.g., degree of enhancement with multisensory versus unisensory stimuli) of young children who receive intervention in either AO or AV presentation formats. Such studies could evaluate moderated differences between AO and AV intervention conditions based on pretreatment characteristics.
Conclusion
This study provides an early step in testing two competing viewpoints on how to develop spoken language and multisensory processing skills in children with hearing loss. It is the first known study to provide direct evidence for the effectiveness and efficiency of word learning instruction in AO and AV intervention conditions for children with hearing loss. Perhaps the most striking finding is that, even under optimal listening conditions, there was no inhibitory effect of access to visual speechreading cues on word learning. This finding was observed even though word learning was assessed in an AO presentation format, an optimal test of whether auditory learning had occurred. Following instruction in the AO and AV intervention conditions, participants accurately identified taught pseudowords during AO probes with similar rates of learning. Therefore, these findings do not support the widespread practice of prohibiting access to visual speechreading cues in order to isolate auditory input and increase the rate of word learning, even when such learning is subsequently assessed without access to visual cues. Future research is required to replicate these findings and to determine whether the same pattern of results is observed for children with hearing loss with different audiological and language profiles, for tasks of varying difficulty levels, and across different timespans.
Notes
J.M., MS, PhD student, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN; S.C., PhD, Professor, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN; P.Y., PhD, Professor, Department of Special Education, Vanderbilt University, Nashville, TN. J.M. conceived of the study, completed data collection, participated in the interpretation of the data, and drafted the manuscript; S.C. participated in the study design and data collection, helped interpret the data, and helped draft the manuscript; P.Y. participated in the design of the study, interpretation of the data, and drafting of the manuscript. All authors read and approved the final manuscript.
Funding
This work was supported by the American Speech-Language-Hearing Foundation [2016 Student Research Grant in Early Childhood Language Development]; the National Center for Advancing Translational Sciences of the National Institutes of Health [UL1 TR000445]; the United States Department of Education [Preparation of Leadership Personnel grant H325D140087]; the Eunice Kennedy Shriver National Institute of Child Health and Human Development of the National Institutes of Health [U54HD083211]; and the Scottish Rite Foundation of Nashville. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the funding agencies.
Conflict of Interest
The authors have no conflicts of interest to disclose.
Acknowledgments
We thank the families who participated in our study and the teachers who collaborated with us to make this study possible. We thank Iliza Butera for her assistance with the multisensory tasks and Michael Douglas for sharing his expertise in providing speech, language, and educational services to children with hearing loss. We also gratefully acknowledge René Gifford for sharing her expertise in pediatric cochlear implantation and for providing comments on an earlier draft of this manuscript, Blair Lloyd for sharing her expertise in SCRD methods, and Melody Sun for serving as a research assistant.
References
Anderson, K. L. (2015). Access is the issue, not hearing loss: New policy clarification requires schools to ensure effective communication access. SIG 9 Perspectives on Hearing and Hearing Disorders in Childhood, 25, 24–36. doi:10.1044/hhdc25.1.24
Ayres, K., & Ledford, J. R. (2014). Dependent measures and measurement systems. In D. L. Gast & J. R. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed., pp. 124–153). New York, NY: Routledge.
Baum, S. H., Stevenson, R. A., & Wallace, M. T. (2015). Testing sensory and multisensory function in children with autism spectrum disorder. Journal of Visualized Experiments, e52677. doi:10.3791/52677
Bergeson, T. R., Pisoni, D. B., & Davis, R. A. O. (2005). Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants. Ear and Hearing, 26, 149–164. doi:10.1097/00003446-200504000-00004
Bergeson, T., Pisoni, D. B., & Davis, R. A. (2003). A longitudinal study of audiovisual speech perception by children with hearing loss who have cochlear implants. Volta Review, 103, 347–370.
Bergeson, T., & Pisoni, D. (2004). Audiovisual speech perception in deaf adults and children following cochlear implantation. In G. Calvert, C. Spence, & B. E. Stein (Eds.), The handbook of multisensory processes (pp. 749–772). Cambridge, MA: MIT Press.
Bernstein, L. E., Eberhardt, S. P., & Auer, E. T., Jr. (2014). Audiovisual spoken word training can promote or impede auditory-only perceptual learning: Prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults. Frontiers in Psychology, 5, 934. doi:10.3389/fpsyg.2014.00934
Bernthal, J. E., Bankson, N. W., & Flipsen, P., Jr. (2009). Articulation and phonological disorders: Speech sound disorders in children (6th ed.). Boston, MA: Pearson.
Camarata, S. M., & Leonard, L. B. (1985). Young children pronounce nouns more accurately than verbs: Evidence for a semantic-phonological interaction. Papers and Reports on Child Language Development, 24, 38–45.
Campbell, R. (2008). The processing of audio-visual speech: Empirical and neural bases. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 363, 1001–1010. doi:10.1098/rstb.2007.2155
Cannon, J. E., Guardino, C., Antia, S. D., & Luckner, J. L. (2016). Single-case design research: Building the evidence-base in the field of education of deaf and hard of hearing students. American Annals of the Deaf, 160, 440–452. doi:10.1353/aad.2016.0007
Carey, S., & Bartlett, E. (1978). Acquiring a single new word. Papers and Reports on Child Language Development at Stanford Child Language Conference, 15, 17–29.
Carrow-Woolfolk, E., & Allen, E. (2014a). Test of auditory comprehension of language—fourth edition (TACL-4). Austin, TX: Pro-Ed.
Carrow-Woolfolk, E., & Allen, E. (2014b). Test of expressive language (TEXL). Austin, TX: Pro-Ed.
Cochrane. (2017). Assessing risk of bias in included studies. Retrieved from http://methods.cochrane.org/bias/assessing-risk-bias-included-studies
Contrera, K. J., Choi, J. S., Blake, C. R., Betz, J. F., Niparko, J. K., & Lin, F. R. (2014). Rates of long-term cochlear implant use in children. Otology & Neurotology, 35, 426–430. doi:10.1097/MAO.0000000000000243
Convertino, C., Borgna, G., Marschark, M., & Durkin, A. (2014). Word and world knowledge among deaf learners with and without cochlear implants. Journal of Deaf Studies and Deaf Education, 19, 471–483. doi:10.1093/deafed/enu024
Davidson, L. S., Geers, A. E., & Nicholas, J. G. (2014). The effects of audibility and novel word learning ability on vocabulary level in children with cochlear implants. Cochlear Implants International, 15, 211–221. doi:10.1179/1754762813Y.0000000051
de Villers-Sidani, E., Chang, E. F., Bao, S., & Merzenich, M. M. (2007). Critical period window for spectral tuning defined in the primary auditory cortex (A1) in the rat. Journal of Neuroscience, 27, 180–189. doi:10.1523/JNEUROSCI.3227-06.2007
Doehring, D. G., & Ling, D. (1971). Programmed instruction of hearing-impaired children in the auditory discrimination of vowels. Journal of Speech, Language, and Hearing Research, 14, 746–754. doi:10.1044/jshr.1404.746
Dunn, L. M., & Dunn, D. M. (2007). Peabody picture vocabulary test, fourth edition (PPVT-4). Minneapolis, MN: Pearson Assessments.
Ehrler, D., & McGhee, R. (2008). Primary test of nonverbal intelligence (PTONI). Austin, TX: Pro-Ed.
Erber, N. P. (1969). Interaction of audition and vision in the recognition of oral speech stimuli. Journal of Speech, Language, and Hearing Research, 12, 423–425. doi:10.1044/jshr.1202.423
Erber, N. P. (1972). Auditory, visual, and auditory-visual recognition of consonants by children with normal and impaired hearing. Journal of Speech, Language, and Hearing Research, 15, 413–422. doi:10.1044/jshr.1502.413
Erber, N. P. (1979). Speech perception by profoundly hearing-impaired children. Journal of Speech and Hearing Disorders, 44, 255–270. doi:10.1044/jshd.4403.255
Estabrooks, W. (1998). Cochlear implants for kids. Washington, DC: Alexander Graham Bell Association for the Deaf.
Estabrooks, W. (Ed.). (2001). 50 frequently asked questions about auditory-verbal therapy. Toronto, Ontario: Learning to Listen Foundation.
Ewing, I. R., Ewing, A. W. G., & Cockersole, F. W. (1938). The handicap of deafness. British Journal of Educational Psychology, 8, 307–312. doi:10.1111/j.2044-8279.1938.tb03134.x
Fu, Q. J., & Galvin, J. J. (2007). Perceptual learning and auditory training in cochlear implant recipients. Trends in Amplification, 11, 193–205. doi:10.1177/1084713807301379
Fudala, J. B. (2000). Arizona articulation proficiency scale—third revision (Arizona-3). Los Angeles, CA: Western Psychological Services.
Gast, D. L. (2014). General factors in measurement and evaluation. In D. L. Gast & J. R. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed., pp. 85–104). New York, NY: Routledge.
Gast, D. L., & Spriggs, A. D. (2014). Visual analysis of graphic data. In D. L. Gast & J. R. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed., pp. 176–210). New York, NY: Routledge.
Geers, A. E. (2002). Factors affecting the development of speech, language, and literacy in children with early cochlear implantation. Language, Speech, and Hearing Services in Schools, 33, 172–183. doi:10.1044/0161-1461(2002/015)
Geers, A. E., Mitchell, C. M., Warner-Czyz, A., Wang, N.-Y., Eisenberg, L. S., & CDaCI Investigative Team. (2017). Early sign language exposure and cochlear implantation benefits. Pediatrics, 140, e20163489. doi:10.1542/peds.2016-3489
Geers, A., Brenner, C., & Davidson, L. (2003). Factors associated with development of speech perception skills in children implanted by age five. Ear and Hearing, 24, 24S–35S. doi:10.1097/01.AUD.0000051687.99218.0F
Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10, 278–285. doi:10.1016/j.tics.2006.04.008
Gilley, P. M., Sharma, A., Mitchell, T. V., & Dorman, M. F. (2010). The influence of a sensitive period for auditory-visual integration in children with cochlear implants. Restorative Neurology and Neuroscience, 28, 207–218. doi:10.3233/RNN-2010-0525
Glick, H., & Sharma, A. (2017). Cross-modal plasticity in developmental and age-related hearing loss: Clinical implications. Hearing Research, 343, 191–201. doi:10.1016/j.heares.2016.08.012
Hillock, A. R. (2010). Developmental changes in the temporal window of auditory and visual integration (Doctoral dissertation). Available from ProQuest Dissertations & Theses Global. (Order No. 3525505)
Holt, R. F., Kirk, K. I., & Hay-McCutcheon, M. (2011). Assessing multimodal spoken word-in-sentence recognition in children with normal hearing and children with cochlear implants. Journal of Speech, Language, and Hearing Research, 54, 632–657. doi:10.1044/1092-4388(2010/09-0148)
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179. doi:10.1177/001440290507100203
Houston, D. M., Beer, J., Bergeson, T. R., Chin, S. B., Pisoni, D. B., & Miyamoto, R. T. (2012). The ear is connected to the brain: Some new directions in the study of children with cochlear implants at Indiana University. Journal of the American Academy of Audiology, 23, 446–463. doi:10.3766/jaaa.23.6.7
Houston, D. M., Carter, A. K., Pisoni, D. B., Kirk, K. I., & Ying, E. A. (2005). Word learning in children following cochlear implantation. Volta Review, 105, 41–72.
Houston, D. M., Stewart, J., Moberly, A., Hollich, G., & Miyamoto, R. T. (2012). Word learning in deaf children with cochlear implants: Effects of early auditory experience. Developmental Science, 15, 448–461. doi:10.1111/j.1467-7687.2012.01140.x
Johnson, C., & Goswami, U. (2010). Phonological awareness, vocabulary, and reading in deaf children with cochlear implants. Journal of Speech, Language, and Hearing Research, 53, 237–261. doi:10.1044/1092-4388(2009/08-0139)
Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman brief intelligence test, second edition. Bloomington, MN: Pearson, Inc.
Kirk, K. I., Hay-McCutcheon, M. J., Holt, R. F., Gao, S., Qi, R., & Gehrlein, B. L. (2007). Audiovisual spoken word recognition by children with cochlear implants. Audiological Medicine, 5, 250–261. doi:10.1080/16513860701673892
Kirk, K. I., Pisoni, D. B., & Lachs, L. (2002). Audiovisual integration of speech by children and adults with cochlear implants. Proceedings of the 7th International Conference on Spoken Language Processing, 1689–1692.
Kral, A., Tillein, J., Heid, S., Hartmann, R., & Klinke, R. (2005). Postnatal cortical development in congenital auditory deprivation. Cerebral Cortex, 15, 552–562.
Kral, A., Tillein, J., Heid, S., Klinke, R., & Hartmann, R. (2006). Cochlear implants: Cortical plasticity in congenital deprivation. Progress in Brain Research, 157, 283–402. doi:10.1093/cercor/bhh156
Lachs, L., Pisoni, D. B., & Kirk, K. I. (2001). Use of audiovisual information in speech perception by prelingually deaf children with cochlear implants: A first report. Ear and Hearing, 22, 236–251. doi:10.1097/00003446-200106000-00007
Lederberg, A. R., Prezbindowski, A. K., & Spencer, P. E. (2000). Word-learning skills of deaf preschoolers: The development of novel mapping and rapid word-learning strategies. Child Development, 71, 1571–1585. doi:10.1111/1467-8624.00249
Lederberg, A. R., & Spencer, P. E. (2001). Vocabulary development of deaf and hard of hearing children. In M. D. Clark, M. Marschark, & M. Karchmer (Eds.), Context, cognition, and deafness (pp. 88–112). Washington, DC: Gallaudet University Press.
Lederberg, A. R., & Spencer, P. E. (2009). Word learning abilities in deaf and hard-of-hearing preschoolers: Effect of lexicon size and language modality. Journal of Deaf Studies and Deaf Education, 14, 44–62. doi:10.1093/deafed/enn021
Ledford, J. R., Barton, E. E., Hardy, J. K., Elam, K., Seabolt, J., Shanks, M., … Kaiser, A. (2016). What equivocal data from single case comparison studies reveal about evidence-based practices in early childhood special education. Journal of Early Intervention, 38, 79–91. doi:10.1177/1053815116648000
Ledford, J. R., & Gast, D. L. (Eds.). (2018). Single case research methodology: Applications in special education and behavioral sciences (3rd ed.). New York, NY: Routledge.
Leonard, L. B., Schwartz, R. G., Chapman, K., Rowan, L. E., Prelock, P. A., Terrell, B., … Messick, C. (1982). Early lexical acquisition in children with specific language impairment. Journal of Speech, Language, and Hearing Research, 25, 554–564. doi:10.1044/jshr.2504.554
Lewkowicz, D. J. (2010). Infant perception of audio-visual speech synchrony. Developmental Psychology, 46, 66–77. doi:10.1037/a0015579
Ling, A. H. (1976). Training of auditory memory in hearing-impaired children: Some problems of generalization. Ear and Hearing, 1, 150–157.
Ling, D. (1989). Foundations of spoken language for hearing impaired children. Washington, DC: Alexander Graham Bell Association for the Deaf.
Luckner, J. L., & Cooke, C. (2010). A summary of the vocabulary research with students who are deaf or hard of hearing. American Annals of the Deaf, 155, 38–67.
Lund, E. (2016). Vocabulary knowledge of children with cochlear implants: A meta-analysis. Journal of Deaf Studies and Deaf Education, 21, 107–121. doi:10.1093/deafed/env060
Lund, E., & Dinsmoor, J. (2016). Taxonomic knowledge of children with and without cochlear implants. Language, Speech, and Hearing Services in Schools, 47, 236–245. doi:10.1044/2016_LSHSS-15-0032
Lund, E., & Douglas, W. M. (2016). Teaching vocabulary to preschool children with hearing loss. Exceptional Children, 83, 26–41. doi:10.1177/0014402916651848
Lund, E., Douglas, W. M., & Schuele, C. M. (2015). Semantic richness and word learning in children with hearing loss who are developing spoken language: A single case design study. Deafness & Education International, 17, 163–175. doi:10.1179/1557069X15Y.0000000004
Lund, E., & Schuele, C. M. (2014). Effects of a word-learning training on children with cochlear implants. Journal of Deaf Studies and Deaf Education, 19, 68–84. doi:10.1093/deafed/ent036
Martin, N. A., & Brownell, R. (2005). Test of auditory processing skills—third edition (TAPS-3). Austin, TX: Pro-Ed.
Marulis, L. M., & Neuman, S. B. (2010). The effects of vocabulary intervention on young children's word learning: A meta-analysis. Review of Educational Research, 80, 300–335. doi:10.3102/0034654310377087
Massaro, D. W. (1984). Children's perception of visual and auditory speech. Child Development, 55, 1777–1788.
Massaro, D. W., & Light, J. (2004). Improving the vocabulary of children with hearing loss. Volta Review, 104, 141–174.
Massaro, D. W., Thompson, L. A., Barron, B., & Laren, E. (1986). Developmental changes in visual and auditory contributions to speech perception. Journal of Experimental Child Psychology, 41, 93–113. doi:10.1016/0022-0965(86)90053-6
McDaniel, J., & Camarata, S. (2017). Does access to visual input inhibit auditory development for children with cochlear implants? A review of the evidence. Perspectives of the ASHA Special Interest Groups, 2, 10–24. doi:10.1044/persp2.SIG9.10
McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746–748. doi:10.1038/264746a0
Murray, M. M., Lewkowicz, D. J., Amedi, A., & Wallace, M. T. (2016a). Multisensory processes: A balancing act across the lifespan. Trends in Neurosciences, 39, 567–579. doi:10.1016/j.tins.2016.05.003
Murray, M. M., Thelen, A., Thut, G., Romei, V., Martuzzi, R., & Matusz, P. J. (2016b). The multisensory function of the human primary visual cortex. Neuropsychologia, 83, 161–169. doi:10.1016/j.neuropsychologia.2015.08.011
Noreña, A. J., & Eggermont, J. J. (2005). Enriched acoustic environment after noise trauma reduces hearing loss and prevents cortical map reorganization. Journal of Neuroscience, 25, 699–705. doi:10.1523/JNEUROSCI.2226-04.2005
Ohde, R. N., & Sharf, D. J. (1992). Phonetic analysis of normal and abnormal speech. Columbus, OH: Charles E. Merrill.
Patterson, M. L., & Werker, J. F. (2003). Two-month-old infants match phonetic information in lips and voice. Developmental Science, 6, 191–196. doi:10.1111/1467-7687.00271
Pilling, M., & Thomas, S. (2011). Audiovisual cues and perceptual learning of spectrally distorted speech. Language and Speech, 54, 487–497. doi:10.1177/0023830911404958
Pisoni, D. B., Cleary, M., Geers, A. E., & Tobey, E. A. (1999). Individual differences in effectiveness of cochlear implants in children who are prelingually deaf: New process measures of performance. Volta Review, 101, 111–164.
Pollack, D. (1970). Educational audiology for the limited hearing child. Springfield, IL: Charles C. Thomas.
Polley, D. B., Steinberg, E. E., & Merzenich, M. M. (2006). Perceptual learning directs auditory cortical map reorganization through top-down influences. Journal of Neuroscience, 26, 4970–4982. doi:10.1523/JNEUROSCI.3771-05.2006
Qi, S., & Mitchell, R. E. (2012). Large-scale academic achievement testing of deaf and hard-of-hearing students: Past, present, and future. Journal of Deaf Studies and Deaf Education, 17, 1–18. doi:10.1093/deafed/enr028
Rhoades, E. A., Estabrooks, W., Lim, S. R., & MacIver-Lux, K. (2016). Strategies for listening, talking, and thinking in auditory-verbal therapy. In W. Estabrooks, K. MacIver-Lux, & E. A. Rhoades (Eds.), Auditory-verbal therapy for young children with hearing loss and their families, and the practitioners who guide them (pp. 285–326). San Diego, CA: Plural Publishing.
Rice, M. L., Buhr, J. C., & Nemeth, M. (1990). Fast mapping word-learning abilities of language-delayed preschoolers. Journal of Speech and Hearing Disorders, 55, 33–42. doi:10.1044/jshd.5501.33
Robbins, A. M. (2016). Auditory-verbal therapy: A conversational competence approach. In M. P. Moeller, D. J. Ertmer, & C. Stoel-Gammon (Eds.), Promoting language and literacy in children who are deaf or hard of hearing (pp. 181–212). Baltimore, MD: Paul H. Brookes Publishing Co.
Robertson, V. S., von Hapsburg, D., & Hay, J. S. (2017). The effect of hearing loss on novel word learning in infant- and adult-directed speech. Ear and Hearing, 38, 701–713. doi:10.1097/AUD.0000000000000455
Ross, L. A., Saint-Amour, D., Leavitt, V. M., Javitt, D. C., & Foxe, J. J. (2007). Do you see what I am saying? Exploring visual enhancement of speech comprehension in noisy environments. Cerebral Cortex, 17, 1147–1153. doi:10.1093/cercor/bhl024
Rouger, J., Fraysse, B., Deguine, O., & Barone, P. (2008). McGurk effects in cochlear-implanted deaf subjects. Brain Research, 1188, 87–99. doi:10.1016/j.brainres.2007.10.049
Sacks, C., Shay, S., Repplinger, L., Leffel, K. R., Sapolich, S. G., Suskind, E., & Suskind, D. (2013). Pilot testing of a parent-directed intervention (Project ASPIRE) for underserved children who are deaf or hard of hearing. Child Language Teaching and Therapy, 30, 91–102. doi:10.1177/0265659013494873
Schorr, E. A., Fox, N. A., van Wassenhove, V., & Knudsen, E. I. (2005). Auditory-visual fusion in speech perception in children with cochlear implants. Proceedings of the National Academy of Sciences of the United States of America, 102, 18748–18750. doi:10.1073/pnas.0508862102
Schwartz, R. G., & Leonard, L. B. (1984). Words, objects, and actions in early lexical acquisition. Journal of Speech, Language, and Hearing Research, 27, 119–127. doi:10.1044/jshr.2701.119
Seitz, A. R., Kim, R., & Shams, L. (2006). Sound facilitates visual learning. Current Biology, 16, 1422–1427. doi:10.1016/j.cub.2006.05.048
Sharma, A., Campbell, J., & Cardon, G. (2015). Developmental and cross-modal plasticity in deafness: Evidence from the P1 and N1 event related potentials in cochlear implanted children. International Journal of Psychophysiology, 95, 135–144. doi:10.1016/j.ijpsycho.2014.04.007
Sharma, A., Gilley, P. M., Dorman, M. F., & Baldwin, R. (2007). Deprivation-induced cortical reorganization in children with cochlear implants. International Journal of Audiology, 46, 494–499. doi:10.1080/14992020701524836
Sharma, A., & Glick, H. (2016). Cross-modal re-organization in clinical populations with hearing loss. Brain Sciences, 6, 4. doi:10.3390/brainsci6010004
Sheffert, S. M., Lachs, L., & Hernandez, L. R. (1996). The Hoosier audiovisual multitalker database. In D. B. Pisoni (Ed.), Research on spoken language processing progress report No. 21 (pp. 578–583). Bloomington, IN: Indiana University.
Sindelar, P. T., Rosenberg, M. S., & Wilson, R. J. (1985). An adapted alternating treatments design for instructional research. Education and Treatment of Children, 8, 67–76.
Stacey, P. C., Raine, C. H., O'Donoghue, G. M., Tapper, L., Twomey, T., & Summerfield, A. Q. (2010). Effectiveness of computer-based auditory training for adult users of cochlear implants. International Journal of Audiology, 49, 347–356. doi:10.3109/14992020903397838
Stanford, T. R., & Stein, B. E. (2007). Superadditivity in multisensory integration: Putting the computation in context. Neuroreport, 18, 787–792. doi:10.1097/WNR.0b013e3280c1e315
Stevenson, R. A., Nelms, C. E., Baum, S. H., Zurkovsky, L., Barense, M. D., Newhouse, P. A., & Wallace, M. T. (2015). Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition. Neurobiology of Aging, 36, 283–291. doi:10.1016/j.neurobiolaging.2014.08.003
Stevenson, R. A., Sheffield, S. W., Butera, I. M., Gifford, R. H., & Wallace, M. T. (2017). Multisensory integration in cochlear implant recipients. Ear and Hearing, 38, 521–538. doi:10.1097/AUD.0000000000000435
Stevenson, R. A., Zemtsov, R. K., & Wallace, M. T. (2012). Individual differences in the multisensory temporal binding window predict susceptibility to audiovisual illusions. Journal of Experimental Psychology: Human Perception and Performance, 38, 1517–1529. doi:10.1037/a0027339
Stoel-Gammon, C. (2011). Relationships between lexical and phonological development in young children. Journal of Child Language, 38, 1–34. doi:10.1017/S0305000910000425
Storkel, H. L., & Hoover, J. R. (2010). An online calculator to compute phonotactic probability and neighborhood density on the basis of child corpora of spoken American English. Behavior Research Methods, 42, 497–506. doi:10.3758/BRM.42.2.497
Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. The Journal of the Acoustical Society of America, 26, 212–215. doi:10.1121/1.1907309
Tomblin, J. B., Harrison, M., Ambrose, S. E., Walker, E. A., Oleson, J. J., & Moeller, M. P. (2015). Language outcomes in young children with mild to severe hearing loss. Ear and Hearing, 36, 76S–91S. doi:10.1097/aud.0000000000000219
Tye-Murray, N. (2014). Foundations of aural rehabilitation: Children, adults, and their family members (4th ed.). Stamford, CT: Cengage Learning.
Walker, E. A., & McGregor, K. K. (2013). Word learning processes in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 56, 375–387. doi:10.1044/1092-4388(2012/11-0343)
Wallace, M. T., Perrault, T. J., Hairston, W. D., & Stein, B. E. (2004). Visual experience is necessary for the development of multisensory integration. Journal of Neuroscience, 24, 9580–9584.
Wallace, M. T., & Stein, B. E. (2007). Early experience determines how the senses will interact. Journal of Neurophysiology, 97, 921–926. doi:10.1152/jn.00497.2006
Wechsler-Kashi, D., Schwartz, R. G., & Cleary, M. (2014). Picture naming and verbal fluency in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 57, 1870–1882. doi:10.1044/2014_JSLHR-L-13-0321
Wendel, E., Cawthon, S. W., Ge, J. J., & Beretvas, S. N. (2015). Alignment of single-case design (SCD) research with individuals who are deaf or hard of hearing with the What Works Clearinghouse standards for SCD research. Journal of Deaf Studies and Deaf Education, 20, 103–114. doi:10.1093/deafed/enu049
What Works Clearinghouse. (2014, March). Procedures and standards handbook (version 3.0). Retrieved from https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_procedures_v3_0_standards_handbook.pdf
Williams, K. T. (2007). Expressive vocabulary test, second edition (EVT-2). Minneapolis, MN: Pearson Assessments.
Willstedt-Svensson, U., Löfqvist, A., Almqvist, B., & Sahlén, B. (2004). Is age at implant the only factor that counts? The influence of working memory on lexical and grammatical development in children with cochlear implants. International Journal of Audiology, 43, 506–515. doi:10.1080/14992020400050065
Wolery, M., Gast, D. L., & Ledford, J. R. (2014). Comparison designs. In D. L. Gast & J. R. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed., pp. 297–345). New York, NY: Routledge.
Xu, J., Yu, L., Rowland, B. A., Stanford, T. R., & Stein, B. E. (2012). Incorporating cross-modal statistics in the development and maintenance of multisensory integration. Journal of Neuroscience, 32, 2287–2298. doi:10.1523/jneurosci.4304-11.2012
Yu, L., Rowland, B. A., & Stein, B. E. (2010). Initiating the development of multisensory integration by manipulating sensory experience. Journal of Neuroscience, 30, 4904–4913. doi:10.1523/JNEUROSCI.5575-09.2010
Zhang, L. I., Bao, S., & Merzenich, M. M. (2001). Persistent and specific influences of early acoustic environments on primary auditory cortex. Nature Neuroscience, 4, 1123–1130. doi:10.1038/nn745
Zupan, B., & Sussman, J. E. (2009). Auditory preferences of young children with and without hearing loss for meaningful auditory–visual compound stimuli. Journal of Communication Disorders, 42, 381–396. doi:10.1016/j.jcomdis.2009.04.002
Appendix A
Pseudoword sets for each participant
Participant 1: auditory-only set /fif/, /gʊʒ/, /mub/, /ʧʊk/; audiovisual set /gʌʒ/, /mum/, /pɑf/, /ʧʌʃ/; control set /fip/, /jʊŋ/, /ʧʊʧ/, /wim/.
Participant 2: auditory-only set /fip/, /jʊŋ/, /ʧʊʧ/, /wim/; audiovisual set /fif/, /gʊʒ/, /mub/, /ʧʊk/; control set /gʌʒ/, /mum/, /pɑf/, /ʧʌʃ/.
Participant 3: auditory-only set /gʌʒ/, /mum/, /pɑf/, /ʧʌʃ/; audiovisual set /fif/, /gʊʒ/, /mub/, /ʧʊk/; control set /fip/, /jʊŋ/, /ʧʊʧ/, /wim/.
Participant 4: auditory-only set /fip/, /jʊŋ/, /ʧʊʧ/, /wim/; audiovisual set /gʌʒ/, /mum/, /pɑf/, /ʧʌʃ/; control set /gʊŋ/, /mib/, /pif/, /ʃʊʧ/.
Appendix B
Example unfamiliar objects.
Appendix C
Example word list for multimodal word recognition task with phonetic transcription: 1. bean /bin/; 2. bone /bon/; 3. bug /bʌg/; 4. chair /ʧɛr/; 5. cod /kɑd/; 6. dame /deɪm/; 7. debt /dɛt/; 8. fit /fɪt/; 9. fool /ful/; 10. give /gɪv/; 11. gut /gʌt/; 12. hash /hæʃ/; 13. lad /læd/; 14. late /leɪt/; 15. mouth /maʊθ/; 16. nose /noz/; 17. rain /reɪn/; 18. rule /rul/; 19. seat /sit/; 20. shape /ʃeɪp/; 21. talk /tɔk/; 22. vice /vaɪs/; 23. wife /waɪf/; 24. work /wɝk/.
Appendix D
Standardized theme boxes.
© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.