Lindsey A. Wilhelm

Abstract

Older adults commonly experience hearing loss that negatively affects the quality of life and creates barriers to effective therapeutic interactions as well as music listening. Music therapists have the potential to address some needs of older adults, but the effectiveness of music interventions is dependent on the perception of spoken and musical stimuli. Nonauditory information, such as contextual (e.g., keywords, picture related to song) and visual cues (e.g., clear view of singer’s face), can improve speech perception. The purpose of this study was to examine the benefit of contextual and visual cues on sung word recognition in the presence of guitar accompaniment. The researcher tested 24 community-dwelling older adult hearing aid (HA) users recruited through a university HA clinic and laboratory under 3 study conditions: (a) auditory stimuli only, (b) auditory stimuli with contextual cues, and (c) auditory stimuli with visual cues. Both visual and contextual nonauditory cues benefited participants on sung word recognition. Participants’ music background and training were predictive of success without nonauditory cues, and visual cues provided greater benefit than contextual cues. Based on the results of this study, it is recommended that music therapists increase the accessibility of music interventions reliant upon lyric recognition through the incorporation of clear visual and contextual cues.

Improved access to healthcare enables adults to live longer than previous generations. As the proportion of adults over the age of 65 increases, so does the need for effective and efficient health practices to promote wellness (Rowe et al., 2016). Since the prevalence and severity of hearing loss increase as individuals age (Tye-Murray, 2009), it is important for music therapists working with older adults to understand and make accommodations for the effects of hearing loss.

Hearing Loss and Older Adults

Age-related hearing loss (presbycusis) results from physiologic changes in the peripheral and central auditory system and a lifetime of exposure to noise; the loss results in declines to physical, cognitive, emotional, and social functioning (Dalton et al., 2003). An individual’s hearing can be discussed as a pure tone average (PTA), which is the average decibel (dB) level, or hearing threshold, that sounds need to reach to be perceived. In general, someone who can perceive sounds at 25 dB or less across the frequencies associated with speech is considered to have no hearing loss. Within the United States, two-thirds of adults over 70 have a hearing loss defined as >25 dB in the better ear (Lin, Thorpe, Gordon-Salant, & Ferrucci, 2011), and the National Institute on Deafness and Other Communication Disorders (NIDCD, 2014) estimates that half of those 75 and older have a disabling hearing loss defined as ≥35 dB. In addition to a significant negative impact on the quality of life, hearing loss creates barriers to effective therapeutic interactions (Chen, Genther, Betz, & Lin, 2014; Ciorba, Bianchini, Pelucchi, & Pastore, 2012; Lin et al., 2013). The use of hearing aids (HAs) is the primary intervention for hearing loss (Lotfi, Mehrkian, Moossavi, & Faghih-Zadeh, 2009). HAs enhance audibility by making sounds louder, but they do not repair auditory system damage.
When properly fitted and programmed, HAs improve speech understanding, especially in quiet environments; however, HAs have limitations in complex listening environments (Tye-Murray, 2009). In real-life settings, numerous auditory signals occur simultaneously, making it challenging for HA users to hear and understand speech amid background noise (Tye-Murray, 2009). HA manufacturers use digital programming and directional microphones in an attempt to remedy this challenge (Weinstein, 2012). HAs are designed to amplify the characteristics of speech but are not designed to do the same with music, which consists of a wider range of frequencies and intensities (Chasin & Hockley, 2014). HA users have reported difficulty understanding lyrics and recognizing musical melodies (Leek, Molis, Kubli, & Tufts, 2008; Rutledge, 2009).

In addition to hearing loss, older adults commonly experience declines in vision and areas of cognition that may compound difficulties with communication. Vision is important for speech perception (Tye-Murray, 2009), aiding in locating sound sources and watching a speaker’s mouth (i.e., speechreading). Cognitive processes help individuals make sense of and assign value to available sensory input (Correia, Barroso, & Nieto, 2018). When deficits in visual and cognitive processes are present, any hearing deficit will be magnified (Beck & Clark, 2009).

In order to better understand how to help older adults with HAs improve comprehension of speech, it is important to first have a basic understanding of the complexities of auditory processing. Auditory scene analysis, first proposed by Bregman (1990), may be helpful in understanding how adults with hearing loss make sense of acoustic stimuli through the interaction of bottom–up and top–down processing (Alain, Dyson, & Snyder, 2006). Perception of an acoustic signal starts with bottom–up processing through the extraction of the stimulus. Bottom–up input may come from a variety of sources, including auditory, visual, and tactile input. Top–down processing occurs simultaneously, based on the individual’s previous experiences and acoustic environment (Alain et al., 2006). A listener with hearing loss, or a deficit in bottom–up processing, will likely rely more heavily on visual cues, knowledge of the music, or contextual cues (examples of top–down processing) to make sense of the degraded auditory input (Rutledge, 2009). Since the perception of any auditory stimuli, including music, will include bottom–up and top–down processing, understanding how auditory processing occurs in older adults with HAs is helpful for those designing music-based interventions for older adults.

Older Adults and Music

Engagement in music has been associated with positive health outcomes for older adults (Cohen et al., 2006; Creech, Hallam, McQueen, & Varvarigou, 2013; Laukka, 2007); however, even while using HAs, an individual may be unable to hear the music, or the perceived quality of the music may be diminished (Chasin & Hockley, 2014). Researchers have shown that both auditory and nonauditory factors impact sung word intelligibility and recognition (Heinrich, Knight, & Hawkins, 2015; Jesse & Massaro, 2010).
Specifically, factors that are performer-related (e.g., auditory factors of diction, voice quality), listener-related (e.g., nonauditory factors of attention, hearing ability), environment-related (e.g., auditory and nonauditory factors of distractions, visibility of singer), and music- or word-related (e.g., auditory factors of text setting, word repetition) have been shown to have an impact on sung word intelligibility (Fine & Ginsborg, 2014). Success on lyric intelligibility tasks may decrease with age, while previous music training, presence of a single vocalist (as opposed to more than one vocalist), and text that is highly predictable may increase intelligibility. In addition, hearing sung material two or three additional times may improve success in understanding the lyrics (Condit-Schultz & Huron, 2015; Ginsborg, Fine, & Barlow, 2011). These findings are important to music therapists working with older adults, as it may be possible to improve lyric intelligibility in clinical settings. In the most recent Workforce Analysis Survey of music therapists in the United States, 16% of respondents reported working with seniors and 16% responded working in geriatric facilities (American Music Therapy Association, 2018). Music therapists use a variety of music interventions when working with older adults (Belgrave, Darrow, Walworth, & Wlodarczyk, 2011), including interventions that are reliant upon the perception of sung lyrics (e.g., life review with music/music-cued reminiscence, songwriting, and therapeutic singing interventions). The effectiveness of those music therapy interventions will depend on how effectively the client can hear the words and music. Based on the prevalence of hearing loss in older adults and the percentage of music therapists who work with older adults, many music therapy clients will likely have some degree of hearing loss, potentially rendering music an ineffective therapeutic tool. In spite of the frequency with which music therapists work with older adults who may have hearing loss, few resources exist for music therapists regarding the effect of age-related hearing loss on the practice of music therapy. Authors have cautioned against music presented at a loud volume (Clair & Memmott, 2008; Prickett, 2000). More recently, Gfeller, Darrow, and Wilhelm (2018) provided four strategies for improving the accessibility of interventions for persons with hearing loss in music therapy sessions: (a) create an effective listening environment, (b) encourage effective and appropriate use of assistive devices, (c)use clear communication, and (d) select and deliver music stimuli that are accessible. Yinger (2014) also made several recommendations for working with older adults with hearing loss in a choral setting that can be grouped into strategies to improve bottom–up and top–down processing. To improve bottom–up processing, participants should use their HAs, and the director can increase proximity to participants, speak clearly in a lower tone, and eliminate extraneous noise. To enhance top–down cognitive processing, the director can use nonverbal cues, provide context, and speak without covering one’s face. Although previous researchers have offered some recommendations to improve music perception for individuals with hearing loss, these recommendations largely come from secondary sources and anecdotal evidence. The effectiveness of these recommendations has not been investigated. 
Currently, no clear evidence exists regarding how individuals with HAs perform on lyric recognition tasks and what benefit may be derived from nonauditory cues. The purpose of this initial exploratory study was to evaluate the benefit of nonauditory cues for older adult HA users on a sung-sentence recognition task. Specifically, the following research questions were addressed: Do older adult HA users derive benefit from nonauditory cues (contextual, visual) on a sung-sentence recognition task? Does the degree of benefit received from nonauditory cues vary based on the type of nonauditory cues (visual or contextual)? Do auditory threshold, age, lipreading ability, and music training predict performance on a sung-sentence recognition task? Do auditory threshold, age, lipreading ability, and music training predict performance on a sung-sentence recognition task with the addition of contextual cues? Do auditory threshold, age, lipreading ability, and music training predict performance on a sung-sentence recognition task with the addition of visual cues? Methods Participants Participants were community-dwelling older adult, bilateral HA users (N = 24) in a Midwestern city who were recruited through a university audiology research laboratory and clinic in 2016. The researcher identified potential participants through flyers posted in the laboratory and clinic as well as by word of mouth. Seventeen participants were female and seven were male; they ranged from 60 to 79 years of age (M = 69 years; SD = 4.93). Participants provided informed consent before being screened for eligibility using the following criteria: 60 to 80 years old; downward sloping, bilateral sensorineural hearing loss (PTA between 25- and 65-decibels in hearing level (dB HL) from .5 to 4 kHz, hearing symmetry within 15 dB); “well-fit” HA(s) with continuous use for at least three months; native English speaker; no cognitive impairment (as rated by a score ≥ 26 on the Montreal Cognitive Assessment1; Nasreddine et al., 2005); and normal or corrected-to-normal visual acuity. Screening for a specific hearing loss, cognitive impairment, and visual acuity was necessary because each of these factors has been shown to impact speech perception and, therefore, potentially lyric recognition. All recruited participants met the screening criteria to participate in this study. Test Materials Measure of Sung-Sentence Recognition Because no preexisting sung lyric recognition test was identified that included nonauditory cues, it was necessary to develop one for this exploratory study. Sung-Sentence Recognition Test Development A more detailed description of test development is available online in Supplementary File 1. The sung-sentence recognition test (SSRT) was created to measure sung-sentence recognition presented with and without nonauditory information, using sentences taken from the Connected Speech Test (CST; Cox, Alexander, & Gilmore, 1987; Cox, Alexander, Gilmore, & Pusakulich, 1988, 1989). The CST is a standardized measure of speech recognition using pairs of 10-sentence passages presented in noise. Scores are calculated based on the correct repetition of keywords. Passage pairs are of equal intelligibility, and reliability is increased when two test pairs are used for comparison conditions, with within-subject standard deviation for HA users reported as 8.0 rationalized arcsine units (RAU; Studebaker, 1985), with a 95% critical difference of 15.5 RAU (Cox et al., 1988). 
The researcher selected the CST for this study because the recognition of connected sentences in noise is similar to the task of understanding lyrics with musical accompaniment. One practice passage pair and six test passage pairs were selected for use in the SSRT (see online Supplementary File 2). Based on the measured acoustical properties of each spoken sentence, the researcher composed melodic analogs (C major, 120 bpm, interval of a perfect fifth) that were evaluated as sounding like music and reflected a wider pitch range, larger stepwise movement, and longer duration than the CST sentences. An example of this process is available online in Supplementary Figure 1. In total, the researcher created 140 melodies for the SSRT (e.g., see online Supplementary Figure 2) that were recorded unaccompanied by a female vocalist (alto vocal range) using a clear singing tone with minimal vibrato. In addition, the researcher used an iPhone 6 Plus to create color video recordings of the melodic analogs in front of a neutral background with adequate lighting. Because music therapists commonly accompany singing with guitar (Krout, 2007), participants heard each melodic analog with one of three guitar strumming patterns (GarageBand loops—strummed acoustic 01, 02, 03). The researcher assigned accompaniment patterns using block randomization to ensure each loop was presented a similar number of times and to reduce predictability. To adjust the signal-to-accompaniment ratio (SAR), the researcher first calibrated the melody and guitar accompaniment of each analog to the same amplitude, then increased or decreased the level of the guitar accompaniment loop, thereby changing the perceived interference caused by the accompaniment. SSRT Final Test Structure The SSRT included three presentation conditions, each consisting of two passage pairs (20 sentences): (a) sung sentence presented with guitar accompaniment (SU-G); (b) a contextual cue representing the topic of the upcoming passage (e.g., picture of a lake with the title “lake”) presented prior to the sung sentence with guitar accompaniment (SU-G+C); and (c) video of the singer singing the sentence with guitar accompaniment (SU-G+V). Through pilot testing with adults with normal to near-normal hearing, the researcher confirmed that test materials were clearly presented and that the task could provide an appropriate challenge; thus, it was deemed feasible to use the SSRT. In addition, pilot testing revealed innate differences in difficulty among the test passages independent of listening condition. The researcher accounted for these innate differences in the final SSRT structure by administering each test passage pair twice and measuring the amount of change between the first and second listening with and without nonauditory cues (within-individual differences). Figure 1 depicts the final structure of the SSRT. To ensure that participants fully understood the testing instructions, test administrators presented them visually and orally. The researcher calibrated the SSRT stimuli to approximate an average of 65 dB Sound Pressure Level (C-weighting). The test administrator recorded participants’ responses using a pencil/paper form. Listening condition scores could range from 0 to 100, with higher scores indicating a greater number of words recognized. Final SSRT scores were calculated by subtracting the first listening score from the second in each presentation condition (within-individual difference). 
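For readers who want to experiment with a comparable stimulus setup, the signal-to-accompaniment manipulation described above (amplitude-matching the sung melody and guitar loop, then offsetting the guitar level) can be sketched in a few lines of Python. This is a minimal illustration with placeholder signals and hypothetical function names, not the procedure or software used in the study.

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a mono signal."""
    return np.sqrt(np.mean(np.square(x)))

def mix_at_sar(melody, accompaniment, sar_db):
    """Level-match the accompaniment to the melody, then offset it so the
    melody sits sar_db above (positive) or below (negative) the guitar,
    and return the mixed signal."""
    matched = accompaniment * (rms(melody) / rms(accompaniment))
    gain = 10.0 ** (-sar_db / 20.0)   # negative SAR values make the guitar louder
    return melody + gain * matched

# Toy example with synthetic one-second signals at 44.1 kHz.
fs = 44100
t = np.arange(fs) / fs
melody = 0.1 * np.sin(2 * np.pi * 220 * t)                      # stand-in for the sung melody
guitar = 0.05 * np.random.default_rng(0).standard_normal(fs)    # stand-in for the strummed loop

mixture = mix_at_sar(melody, guitar, sar_db=-5)   # one of the SAR test levels used in the study
```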
Figure 1. Structure of the sung-sentence recognition test. Note. SU-G = guitar accompaniment; SU-G+C = guitar accompaniment with contextual cues; SU-G+V = guitar accompaniment with visual cues.

Measure of Lipreading Ability

Form A of the Utley test (DiCarlo & Kataja, 1951; Utley, 1946) was used to measure innate lipreading ability. To administer the Utley test, the researcher used a video recording (without sound) of the face of the same female vocalist who recorded the SSRT stimuli saying the phrases from the Utley test. The number of words recognized out of a maximum score of 125 determined lipreading scores. The following distributions of scores have been reported for individuals with hearing loss: range = 0–84; M = 33.60; SD = 16.40 (Utley, 1946).

Measure of Music Training

The Music Training and Involvement Questionnaire (MTIQ; adapted from Gfeller et al., 2000) measured previous music training. Participants answered five questions extracted from the original questionnaire, covering involvement in instrumental and vocal performance ensembles, music appreciation and theory classes, and music lessons. The original tool, created by Gfeller and colleagues, collected additional information related to music listening habits and esthetic enjoyment that was not needed to answer the research questions in this study. The range of scores on the MTIQ was 0–48, with higher scores indicating a greater degree of music training and background.

Design

This study used a single-group repeated measures design. Predictor variables identified as impacting speech perception in background noise included auditory threshold (Tye-Murray, 2009), age (Ginsborg et al., 2011; Heinrich et al., 2015), innate lipreading ability (Tye-Murray, 2009), and music training (Parbery-Clark, Skoe, & Kraus, 2009). Researcher-manipulated variables included two types of nonauditory cues: contextual and visual. The outcome variable was word recognition of sung sentences.

Sample Size and Power Analysis

The researcher performed a power analysis to determine adequate sample size using G*Power 3.1 (Faul, Erdfelder, Lang, & Buchner, 2007) based on the smallest differences between SSRT conditions in pilot testing. As the primary aim of this study was to determine whether there was a difference in word-recognition success given the type of cue provided (none, contextual, visual), the necessary sample size was calculated based on the change in SSRT condition scores. Pilot data from normal-hearing adults were used for the power analysis because of anticipated difficulty in accessing a large enough sample of persons with HAs with the desired audiological profile for both pilot and final testing. Data from normal-hearing adults (N = 18) yielded an estimated mean difference between SU-G and SU-G+C of 3.83 with a standard deviation of 6.18 (effect size = 0.62). Using a two-tailed, paired t-test to compare sentence recognition between the two conditions with an alpha of .05, a sample of 23 participants would achieve 81% power to detect a statistically significant difference between means. In total, 24 participants were recruited to achieve the minimum sample size of 23 determined through the power analysis while allowing for possible attrition or measurement error.
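The published sample-size calculation was done in G*Power; an equivalent computation can be sketched in Python (this is an assumed re-creation, not the authors' code). A paired t-test is treated as a one-sample t-test on the difference scores.

```python
import math
from statsmodels.stats.power import TTestPower

# Pilot estimates reported above: mean difference 3.83, SD 6.18.
effect_size = 3.83 / 6.18          # approximately 0.62

n_required = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                                      power=0.80, alternative='two-sided')

# solve_power returns a fractional n; rounding up should reproduce the
# reported minimum sample size of 23.
print(math.ceil(n_required))
```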
Procedure

A university Institutional Review Board approved all study procedures. During a single visit lasting approximately two hours, the researcher or a trained research assistant acquired informed consent, then administered screening measures, a demographic information questionnaire, the SSRT, the Utley test, and the MTIQ. A clinical audiologist and two graduate audiology students served as research assistants. For the SSRT, the researcher randomly assigned each participant to a presentation condition order for the first and second listening opportunities (see Figure 1). Between the first and second presentation of the test passages, participants took a short break to prevent testing fatigue. Because of how the SSRT was administered, it was not possible to mask the presentation condition order from participants or researchers. The SSRT and Utley test stimuli were presented on a 19″ computer monitor positioned at eye level one meter from the participant. Auditory stimuli were delivered from a PC through Grason-Stadler speakers in a sound-treated booth. During testing, all participants used the same HA settings they typically used when listening to music. Because individuals vary greatly in their success when listening in noise (Tye-Murray, 2009), the test administrator presented 5–7 practice sentences to determine the SAR test level (−7, −5, −2, 0, or +2) that approximated a 50% performance level. This SAR test level was used for the entire SSRT. The Utley test and MTIQ followed the SSRT.

Data Analysis

The researcher analyzed the data using IBM SPSS Statistics (version 22) and compared measures of central tendency using scatter plots to look for overall trends and possible outliers for all independent and dependent variables. The researcher treated PTA and age as continuous variables and the administered SAR level as categorical. Both the MTIQ and Utley tests provided discrete scores. One score was excluded from analysis because the Utley test was administered to the participant outside the sound booth, leaving 23 scores for analyses. Because the SSRT was based on an existing measure, methods specific to analyzing the CST (Cox et al., 1988) guided analysis of SSRT data. The researcher converted raw scores to RAU to linearize the variable and provide a constant variance, thus permitting the use of linear analysis methods. The researcher then calculated the difference in word recognition between the first and second listening opportunities for each presentation condition. Missing data from one participant for the SU-G+V condition left 23 scores available for SU-G+V analysis.
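The RAU conversion described above follows Studebaker's (1985) rationalized arcsine transform, which maps a proportion-correct score onto an approximately linear scale with stable variance. A minimal sketch, assuming scores are counts of key words correct out of a known total; the example values are hypothetical, not taken from the study data.

```python
import numpy as np

def rau(correct, total):
    """Rationalized arcsine transform (Studebaker, 1985): converts a count of
    correct responses out of `total` into rationalized arcsine units (RAU)."""
    theta = (np.arcsin(np.sqrt(correct / (total + 1))) +
             np.arcsin(np.sqrt((correct + 1) / (total + 1))))
    return (146.0 / np.pi) * theta - 23.0

# Hypothetical example: 42 of 100 key words recognized on the first listening
# and 58 of 100 on the second; the change score is the difference in RAU.
first, second = rau(42, 100), rau(58, 100)
change = second - first
```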
Preliminary Data Screening

The researcher conducted preliminary data screening to detect outliers and to test the assumptions for paired t-tests and multiple regression analyses. The distributions of the SSRT change scores were examined for normality using histograms and scatter plots and confirmed through a Kolmogorov–Smirnov (K–S) test. Assumptions of multiple regression include variable type and variance, linearity, absence of multicollinearity, homoscedasticity, independence, and independent and normally distributed errors (Kleinbaum, Kupper, Nizam, & Muller, 2008). All variables were identified as quantitative or categorical, and the variance of each was examined using measures of central tendency and scatter plots to check for outliers. Linearity and homoscedasticity were also evaluated using scatter plots. To assess the potential for multicollinearity, the researcher examined a correlation matrix and collinearity diagnostics for the predictor variables. A separate examination of regression analyses for each presentation condition eliminated the need for a mixed model to examine within-subject outcomes (measurements were independent). Scatter plots and the Durbin–Watson test evaluated the distribution of errors (presence of autocorrelation). Based on these checks, the researcher concluded that the assumptions necessary for paired t-tests and multiple regression analyses had been met.

Statistical Analyses to Examine Research Questions

The researcher used a series of paired t-tests (p < .05) to compare within-subject differences in word recognition to determine whether nonauditory information resulted in a significant difference in the change scores across presentation conditions (research questions 1 and 2). Correlations were used to examine potential relationships among the study variables. Due to the exploratory nature of this study, the researcher chose to define the threshold of significance as p = .10 for the correlation analyses (Johnson, Huron, & Collister, 2014; Schumm, Pratt, Hartenstein, Jenkins, & Johnson, 2013). PTA, age, and measured Utley test and MTIQ scores were applied to three multiple-regression analyses to explore their relationships with the SSRT, given the presentation condition (research questions 3–5). In addition to calculating R2, the researcher also calculated the Predicted R2 for each model. The Predicted R2 indicates how well a model predicts responses for new observations. A Predicted R2 that is much lower than R2 indicates overfitting of the model. With results from 24 participants available for analysis, the researcher anticipated that while any prediction model may provide limited statistically significant outcomes, it could provide meaningful results by furthering understanding of the relationships between the identified predictor and dependent variables. All independent variables were entered into each regression model simultaneously and significance was evaluated using t-tests. Individualized data for each study measure are available online in Supplementary Tables 1 and 2.

Results

Preliminary Analyses

Pure-Tone Thresholds

Participants' average PTA ranged from 32.5 to 63.75 dB HL from 0.5 to 4 kHz. Associated mean thresholds and standard deviations for the right and left ears, respectively, across 0.5, 1, 2, and 4 kHz were as follows: 34.97 (11.75)/35.42 (12.06), 43.33 (14.27)/42.92 (14.96), 52.08 (11.41)/52.08 (13.18), and 57.5 (10.00)/59.17 (9.96).

Music Training and Involvement

Table 1 presents self-reported music training and involvement during and following primary, secondary, and post-secondary education (elementary, middle/high, college). All but one participant reported some involvement in music groups, classes, or lessons during their life; the remaining participant agreed with the statement “I can sing.” One participant did not attend college, which is why the total number of participants for that category was 23. Seven participants indicated that they were formally trained; 11 reported being neither self-taught nor formally trained. Total scores across participants on the MTIQ ranged from 1 to 31 (M = 8.25; SD = 7.57).

Table 1. Participants' Reporting of Participation and Involvement in Music Groups, Lessons, and Classes in Elementary School, Middle/High School, College, and Post-College

                              1–3 years   4–6 years   7–8 years   10+ years   None
Elementary School (N = 24)
  Groups                          7           6           0           –         11
  Lessons                        10           5           0           –          9
  Classes                         0           4           2           –         18
Middle/High School (N = 24)
  Groups                          8           8           1           –          7
  Lessons                         7           6           0           –         11
  Classes                         1           2           0           –         21
College (N = 23)
  Groups                          2           1           –           –         20
  Lessons                         2           1           –           –         20
  Classes                         6           1           –           –         16
Post-College/School (N = 24)
  Groups                          0           1           0           4         19
  Lessons                         3           0           0           2         19
  Classes                         3           0           0           1         20

Utley Test

The final data analysis of the lipreading scores for the 23 participants showed a range of 8–81 identified words (M = 38.87; Mdn = 39; Mo = 30) out of a possible total of 125.

Signal-to-Accompaniment Ratio

The SAR level used to test each participant was based on an estimate of a 50% word-recognition success level, as described previously. The final distribution of SAR testing levels across the 24 participants was as follows: 10 participants at −7; 5 at −5; 4 at −2; 2 at 0; and 3 at +2.

Sung-Sentence Recognition Test

Refer to Table 2 for the descriptive statistics of measured change given presentation condition on the SSRT. Each participant's scores changed between the first and second listening opportunities in each presentation condition. The measured change in the SU-G condition represents the learning effect present without the addition of nonauditory information (improvement between the first and second listening opportunities). Changes in the SU-G condition, D(24) = 0.092, p = .200; SU-G+C condition, D(24) = 0.092, p = .200; and SU-G+V condition, D(23) = 0.125, p = .200, were approximately normally distributed.

Table 2. Descriptive Statistics of Measured Change Given Presentation Condition of SSRT (N = 24)

Condition    M       SD      Min     Max
SU-G         11.31    6.14    1.02   22.23
SU-G+C       14.93   17.11    0.96   29.87
SU-G+Va      31.80   11.36   10.90   60.23

Note. Scores calculated based on the difference between first listening (SU-G) and second listening in the number of words recognized out of 100. SSRT = Sung-Sentence Recognition Test; SU-G = guitar accompaniment; SU-G+C = guitar accompaniment with contextual cues; SU-G+V = guitar accompaniment with visual cues.
aDue to missing data, calculations are based on n = 23.
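The normality screening reported above (D statistics with p values of .200, which SPSS reports as a lower bound) is consistent with a Lilliefors-corrected Kolmogorov–Smirnov test. A sketch of the same check in Python, using simulated change scores as placeholders because the individual data are in the online supplements:

```python
import numpy as np
from statsmodels.stats.diagnostic import lilliefors

# Hypothetical change scores (difference between first and second listening)
# for one presentation condition; these values are placeholders only.
rng = np.random.default_rng(1)
su_g_change = rng.normal(loc=11.3, scale=6.1, size=24)

# Lilliefors-corrected Kolmogorov-Smirnov test of normality.
d_stat, p_value = lilliefors(su_g_change, dist='norm')
```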
Correlations

The researcher examined both nonparametric Spearman and parametric Pearson correlations for each of the study variables. These tests yielded the same results; therefore, only the Pearson correlation results are reported in this section. Table 3 provides the bivariate correlations among the five independent variables (I) and three dependent variables (D). Based upon the inspection of scatter plots, the researcher deemed it reasonable to treat SAR as a linear continuous variable. SAR was included as a potential predictor variable at this stage of the analysis because it was possible that the experienced level of difficulty may have differed across participants and thus influenced outcomes on the SSRT. Based on the exploratory nature of this experiment, the researcher examined the significance level associated with each correlation based on three levels of significance (p < .1, .05, and .01). The data analysis revealed one statistically significant relationship at the .01 level, between the administered SAR level of the SSRT and PTA (r = .802, p < .001). Considering the other predictor variables with the SSRT presentation conditions, the correlation between MTIQ and SU-G (r = .397, p = .055) was the second-strongest correlation, followed by the correlation between SU-G+C and SU-G+V (r = .372, p = .080).

Table 3. Pearson Correlations for All Predictor Variables and SSRT Presentation Conditions

Variable           I-1      I-2      I-3      I-4      I-5      D-1      D-2      D-3
I-1. Age            —
I-2. MTIQ         −.282      —
I-3. Utley test    .024    −.154      —
I-4. PTA          −.074     .073     .114      —
I-5. SAR           .060     .134     .154    .802***    —
D-1. SU-G          .119     .397*    .048    −.121     .067      —
D-2. SU-G+C        .160     .239    −.006     .011     .224     .225      —
D-3. SU-G+V       −.137    −.199     .211     .176     .166     .029     .372*     —

Note. SSRT = Sung-Sentence Recognition Test; I = independent variable; D = dependent variable; MTIQ = music training and involvement questionnaire; PTA = pure-tone average; SAR = signal-to-accompaniment ratio; SU-G = guitar accompaniment; SU-G+C = guitar accompaniment with contextual cues; SU-G+V = guitar accompaniment with visual cues.
*p < .10, two-tailed. ***p < .01, two-tailed.

Correlations among the SAR level and the SSRT presentation conditions ranged from r = .067 to .224. Table 4 presents the results of the repeated-measures ANOVA conducted to analyze whether the SAR level had impacted SSRT scores and to determine whether SAR should be added to the a priori predictive models. The results presented in Table 4 include the Huynh–Feldt correction, even though the observed Mauchly test statistic (W = 0.782, p = .086) indicated that the assumption of sphericity was not violated at the .05 level of significance. The statistically significant difference (p < .001) among the presentation conditions on the SSRT was expected and was analyzed further in later analyses. The multivariate test of Presentation × SAR (F = 0.138, p = .831) did not reach conventional levels of statistical significance. Because the correlations among the SAR level and the SSRT presentation conditions (Table 3) were minimal, and because the Presentation × SAR comparison had no significant effect (the tested level of SAR did not produce a significant difference in SSRT scores), the researcher did not include SAR as a predictor variable in later analyses.

Table 4. Within-Subject SAR Effects for SU-G, SU-G+C, and SU-G+V Presentation Conditions

Comparison             SS          df       MS          F        p
Presentation           2,291.449   1.848    1,239.679   19.052   <.001
Presentation × SAR     16.646      1.848    10.134      0.138    .831
Error (Presentation)   2,525.749   38.817   65.068

Note. SAR = signal-to-accompaniment ratio; SU-G = guitar accompaniment; SU-G+C = guitar accompaniment and contextual cues; SU-G+V = guitar accompaniment and visual cues.

Research Questions

Research Question 1: Do Older Adult HA Users Derive Benefit From Nonauditory Cues on a Sung-Sentence Recognition Task?
The researcher compared measured change scores for the SU-G and SU-G+C conditions and for the SU-G and SU-G+V conditions to investigate whether nonauditory cues improved recognition beyond the learning effect. Table 5 shows the resultant t and p values. On average, participants scored significantly better on the SSRT with contextual cues (M = 14.93, SE = 1.45) than in the SU-G condition (M = 11.31, SE = 1.25), t(23) = −2.14, p = .043, d = −0.437. When provided with contextual cues, word recognition improved by just under 0.50 standard deviation. Similarly, participants scored significantly better with visual cues (M = 31.80, SE = 2.37) than in the SU-G condition (M = 11.31, SE = 1.25), t(22) = −7.61, p < .001, d = −1.587. When provided with visual cues, word recognition improved by 1.50 standard deviations, indicating that recognition of sung sentences was improved by the presence of nonauditory cues.

Table 5. Paired t-Tests Comparing Measured Change Among Presentation Conditions (N = 24)

Comparison             t-value    df    p        d
SU-G × SU-G+C          −2.139     23    .043     −0.437
SU-G × SU-G+Va         −7.610     22    <.001    −1.587
SU-G+C × SU-G+Va       −7.373     22    <.001    −1.537

Note. SU-G = guitar accompaniment; SU-G+C = guitar accompaniment and contextual cues; SU-G+V = guitar accompaniment with visual cues.
aDue to missing data, calculations are based on n = 23.

Research Question 2: Does the Degree of Benefit Received From Nonauditory Cues Vary Based on the Type of Nonauditory Cue?

To evaluate whether contextual and visual nonauditory cues provided different levels of benefit, the researcher used a paired t-test to compare the measured change scores. Table 5 shows the resultant t and p values. Participants scored significantly better in the SU-G+V condition than in the SU-G+C condition, t(22) = −7.37, p < .001, d = −1.537. When provided with visual cues, word recognition improved by 1.50 standard deviations compared with listening with contextual cues. Based on these results (p < .001), it is appropriate to conclude that there was a difference in the recognition scores of sung sentences given visual or contextual cues, with increased benefit observed with visual cues.
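The paired comparisons in Table 5 can be reproduced with a few lines of Python. The sketch below uses simulated change scores (the per-participant values are in the online supplements) and the paired-samples convention for Cohen's d, the mean of the paired differences divided by the standard deviation of those differences, which is consistent with the d values reported above.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant change scores (in RAU) for two presentation
# conditions; these values are placeholders, not the study data.
rng = np.random.default_rng(2)
su_g = rng.normal(11.3, 6.1, 24)             # no nonauditory cues
su_g_c = su_g + rng.normal(3.6, 8.0, 24)     # with contextual cues

# Paired (repeated-measures) t-test.
t_stat, p_value = stats.ttest_rel(su_g, su_g_c)

# Cohen's d for paired data: mean difference / SD of the differences.
diff = su_g - su_g_c
d = diff.mean() / diff.std(ddof=1)
```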
Research Question 3: Do Auditory Threshold, Age, Lipreading Ability, and Music Training Predict Performance on a Sung-Sentence Recognition Task?

Multiple linear regression analysis was conducted to evaluate all predictor variables together by entering each variable into the model to determine how much variance in the SU-G condition change scores could be explained by the combination of these four independent variables. Table 6 shows the final model with regression coefficients. The combined findings of the regression analysis revealed a nonsignificant statistical relationship between the independent variables and the SU-G change scores, F(4, 18) = 1.548, p = .231, R2 = .256, Predicted R2 = −.17.

Table 6. Summary of Multiple Regression Analysis for Each Listening Condition

                   B        SE B     t-value    p
SU-G (N = 23)
  PTA             −0.120    0.147    −0.816     .425
  Utley test       0.040    0.060     0.670     .511
  Age              0.301    0.265     1.139     .270
  MTIQ             0.407    0.175     2.330     .032**
SU-G+C (N = 23)
  PTA              0.002    0.186     0.012     .991
  Utley test       0.012    0.076     0.159     .876
  Age              0.358    0.334     1.069     .299
  MTIQ             0.295    0.221     1.338     .198
SU-G+V (N = 22)
  PTA              0.296    0.299     0.988     .337
  Utley test       0.074    0.120     0.620     .543
  Age             −0.414    0.524    −0.791     .440
  MTIQ            −0.384    0.351    −1.092     .290

Note. B and SE B are unstandardized coefficients and their standard errors. SU-G = guitar accompaniment; SU-G+C = guitar accompaniment and contextual cues; SU-G+V = guitar accompaniment with visual cues; PTA = pure-tone average; MTIQ = music training and involvement questionnaire.
**p < .05, two-tailed.

While the F test for this model did not reveal a conventional statistically significant improvement over a model based on the mean change score in the SU-G condition, the correlation between MTIQ and the SU-G change scores (r = .397) indicated that these two variables were related. MTIQ was the only predictor variable that showed a significant association with the SU-G condition, t(24) = 2.33, p = .032. The difference in SU-G scores among these participants was associated with reported levels of music training/involvement. Participants with higher MTIQ scores were more successful in recognizing key words in sung sentences with guitar accompaniment.
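The Predicted R2 values reported for these models reflect leave-one-out prediction error, typically computed from the PRESS statistic; negative values, as seen here, mean the model predicts new observations worse than simply using the mean. A sketch of how such a simultaneous-entry model and its Predicted R2 might be computed, using placeholder data and hypothetical column names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical predictor and outcome data; the column names follow the study's
# variables, but the values are placeholders, not the study data.
rng = np.random.default_rng(3)
data = pd.DataFrame({
    'PTA':   rng.uniform(32.5, 63.8, 23),
    'Utley': rng.integers(8, 82, 23),
    'Age':   rng.integers(60, 80, 23),
    'MTIQ':  rng.integers(1, 32, 23),
})
data['SU_G_change'] = rng.normal(11.3, 6.1, 23)

# All predictors entered simultaneously.
X = sm.add_constant(data[['PTA', 'Utley', 'Age', 'MTIQ']])
model = sm.OLS(data['SU_G_change'], X).fit()

# Predicted R^2 from the PRESS statistic (sum of squared deleted residuals).
influence = model.get_influence()
press = np.sum(influence.resid_press ** 2)
ss_total = np.sum((data['SU_G_change'] - data['SU_G_change'].mean()) ** 2)
predicted_r2 = 1 - press / ss_total
```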
Research Question 4: Do Auditory Threshold, Age, Lipreading Ability, and Music Training Predict Performance on a Sung-Sentence Recognition Task With the Addition of Contextual Cues?

The researcher entered each predictor variable into the model to determine how much of the variance in the SU-G+C change scores could be predicted. Table 6 shows the final model with regression coefficients. The combined findings of the regression analysis showed a nonsignificant statistical relationship between the independent variables and the SU-G+C change scores, F(4, 18) = 0.582, p = .679, R2 = .115, Predicted R2 = −.93.

Research Question 5: Do Auditory Threshold, Age, Lipreading Ability, and Music Training Predict Performance on a Sung-Sentence Recognition Task With the Addition of Visual Cues?

The researcher entered each predictor variable into the model to determine how much variance in the SU-G+V scores could be predicted. Table 6 shows the final model with regression coefficients. The combined findings of the regression analysis revealed a nonsignificant statistical relationship between the independent variables and the SU-G+V change scores, F(4, 17) = 0.806, p = .539, R2 = .159, Predicted R2 = −.63.

Discussion

In the current study, the inclusion of contextual and visual cues improved word recognition of sung sentences, with the greater benefit derived from visual cues. In addition, prior music training and involvement correlated most highly with sung-sentence recognition in the auditory-only listening condition. Finally, models incorporating age, average auditory threshold, music training/involvement, and lipreading accounted for 25.6% of the variance of scores in the SU-G condition, 11.5% in the SU-G+C condition, and 15.9% in the SU-G+V condition. However, the models were overfit and were not statistically significant.

During this study, HA users had difficulty identifying key words in sung sentences that were presented with guitar accompaniment. Other instruments (e.g., piano) or instrument families (e.g., brass, woodwind, string, percussion) may result in different findings; however, Leek et al. (2008) found that HA users have reported difficulty with lyric recognition in general. Additionally, changes in the melodic analogs (e.g., different musical key and/or tempo) may affect the success of HA users on this task. Success based on these types of musical changes was not evaluated in this study. Listeners in this study were more successful when the test included contextual and visual cues. These findings are consistent with suggestions made by Rutledge (2009).

Previous literature has shown that individuals possess varying levels of innate lipreading ability, which is directly related to speechreading (visual and auditory cues with speech; Tye-Murray, 2009). However, the degree to which lipreading ability is related to sing-reading ability (visual and auditory cues with singing) is unknown. Speech and singing are not the same; differences include how the respiratory system and vocal mechanism are utilized, the range of frequencies, the production rate, and phoneme duration (Jeffries, Fritz, & Braun, 2003; Lindblom & Sundberg, 2007; Strong & Plitnik, 2007). In the current study, lipreading scores were not highly correlated with the degree of benefit observed in the SU-G+V condition (r = .211, p = .347).
The observed correlation was in a positive direction, but the strength of the relationship may have been impacted by the sample size and any innate differences between visual cues present for speech and singing. Fine and Ginsborg (2014) have found that environment-related factors (e.g., visibility of the singer) and listener-related factors (e.g., attention and hearing ability) affect word intelligibility in vocal music. Listeners in the current study all had a similar configuration and degree of hearing loss, which may explain why the researcher found only a limited correlation between the listeners’ PTA and success on the listening task. Rather than considering vision acuity and cognition as variables in the present study, the researcher accounted for them through screening measures by excluding participants with cognitive impairments or reduced visual acuity. More nuanced tests of specific cognitive functioning, such as executive functioning, attention, and spatial intelligence, could have influenced the outcomes. The limited correlation between measured lipreading ability and benefit derived from the inclusion of visual cues may indicate that differences in other cognitive processes, such as working memory or attention, could have a greater impact than lipreading ability on recognition of sung sentences (Picou, Ricketts, & Hornsby, 2011). In addition, during SSRT pilot testing, some listeners identified experience with a particular passage topic (e.g., “owl” and prior knowledge of owls) that may have enhanced individual benefit from contextual cues. Further study is necessary to better assess the role of cognitive processing in song recognition tasks and to clarify differences among listeners that may contribute to variations in benefit gained from nonauditory cues. A positive correlation appeared to exist between the age of the listener and average auditory threshold, which aligns with previous research. In prior studies, listener age appears to be negatively correlated with sung-sentence intelligibility (Ginsborg et al., 2011; Heinrich et al., 2015). If the age range of listeners in this study had not been limited to 20 years, it is possible that a negative correlation would have been found between age and the listening task. When sentences were sung without any nonauditory cues, the listener’s amount of musical training/involvement correlated positively with recognition. This is consistent with prior research findings by Parbery-Clark et al. (2009) that persons with more music training are more accurate in perceiving speech in background noise. Similarly, Ginsberg et al. (2011) found that previous singing experience enhanced listeners’ success with identifying sung words and that all listeners were more successful with the identification task when listening for a second time. In the current study, most listeners were more successful during the second listening, regardless of the presence of nonauditory cues. Limitations of the Study Limitations to this study include the measured innate differences in melodies created from the original CST sentences, the potential that the increased benefit from the SU-G+V condition of the SSRT is attributable at least in part to individual characteristics of the singer, and the fact that the small sample size did not provide adequate power for individual variables to predict scores on the SSRT. 
In addition, the findings of this study cannot be generalized to persons who have hearing losses different from the participant selection criteria or to those individuals with other types of hearing devices (e.g., cochlear implants). In this study, even though the melodic analogs were based on the measured spectral content of stimuli from an existing standardized measure, the final melodies exhibited more variability and larger innate differences than the CST stimuli. The study design and SSRT structure accounted for these differences. Future studies using the SSRT would benefit from an examination of the precise nature and source of the innate differences among the stimuli. Additionally, if a different speaker/singer recorded the stimuli for the SSRT and Utley tests, the quality or number of facial cues while speaking and singing could differ from the study recordings. However, the Utley test scores in this study (range = 8–81, M = 38.77, SD = 22.00) are similar to previously published parameters (range = 0–84, M = 33.60, SD = 16.40; Utley, 1946) and the small sample size may explain the greater observed standard deviation. Finally, while this study was sufficiently powered to test the measured benefit between the first and second listening, the sample size was too small to reliably test the predictive ability of the independent variables for the SSRT. The researcher’s choice to base the power analysis on the observed difference between the first and second listening ensured that differences between the three presentation conditions could be detected. The large difference between the calculated R2 and Predicted R2 demonstrates a lack of predictive ability of the model and that the model is overfit. In this study, additional models with fewer variables were not investigated due to a lack of sufficient power for these comparisons. The variables that were studied are important to speech recognition and may also be for sung-sentence recognition, but the empirical analysis was complicated by variable interaction and noise in the data. More research is indicated since the model did not confirm the predictive nature of these variables and using a larger sample size could reveal additional variables that are meaningful for predicting success on sung-sentence recognition. This study sheds light on the complexity of the auditory and listening environment. The SSRT is a metric for an outcome of interest to music therapists in practice but it does not consider all of the variables that affect the client’s ability to hear and understand. Therefore, there is a need for further research. Clinical Implications and Conclusion Music therapists working with older adults can safely assume that many of these individuals will experience some degree of hearing loss, and there is no guarantee that those with hearing loss will use functional HAs. Regardless of HA use and previous experience with music, all adults with hearing loss may experience difficulty with interventions that require sung word recognition and identification. While contextual cues (e.g., pictures representing song topics) may help to prime previously existing knowledge in individuals (top–down processing), clear visual cues (e.g., view of the singer’s face) may provide even greater benefit. Although the use of contextual and visual cues within clinical settings involves conscious planning by the music therapist, these cues have the potential to increase the effectiveness of music-based interventions and can be provided at a minimal cost. 
Most important, however, the effectiveness of music-based interventions could be increased. Within clinical settings, older adults with hearing loss may also present with comorbid conditions. While it is not possible to directly generalize the findings from this study to these individuals, it is anticipated that the inclusion of contextual and visual cues would not inhibit, and may in fact improve, the effectiveness of music interventions across individuals (i.e., Universal Design). Specific recommendations for music therapists to improve the listening environment and accessibility of sung lyrics include:

- ensure that individuals who wear glasses are wearing them and that the lenses are clean
- ensure that individuals who wear HAs or use another assistive listening device are using these devices and that they are functioning
- increase the intensity of the therapist's voice by using a wireless microphone and speaker
- decrease the intensity of the accompanying musical instrument
- utilize proximity to increase the intensity of the therapist's voice and the accessibility of visual cues

However, the effectiveness of the above recommendations should be investigated further, and, as always, clinicians should use their own clinical judgment and watch for instances where the inclusion of nonauditory cues may be contraindicated.

In summary, the results from the current study provide preliminary evidence that older adult HA users benefit from contextual and visual cues for sung lyric recognition tasks and that music training/background is most influential when nonauditory cues are not present. Further work is required to investigate the specific mechanisms behind this finding, specifically which cognitive processes are responsible and whether individual listener characteristics exist that influence the benefit derived from contextual and visual cues when listening to sung sentences.

Funding

This research was supported in part through financial support from Dr. Richard and Mrs. Ellen Caplan in addition to a Graduate Student Research Grant from the Graduate College at The University of Iowa awarded in 2015.

Conflicts of interest: None declared.

The author would like to thank Elizabeth Stangl, Kelsey Dumanch, and Lisa Brody for assistance in data collection; Virginia Driscoll for assistance in programming the test stimuli; Professors Richard Hurtig, Mary Adamek, and Elizabeth Walker for assistance in developing the test stimuli; and all of the participants who took part in this study. This study was completed in partial fulfillment of the degree of Doctor of Philosophy at The University of Iowa. As a colleague at Colorado State University, the Editor-in-Chief did not participate in the peer review process for this manuscript. Associate Editor Olivia Swedberg Yinger served as Sr. Editor for the management of the peer review process and related decisions.

References

Alain, C., Dyson, B. J., & Snyder, J. S. (2006). Aging and the perceptual organization of sounds: A change of scene? In P. Michael Conn (Ed.), Handbook of models for human aging (pp. 759–769). New York: Elsevier Academic Press. Retrieved from http://faculty.unlv.edu/jsnyder/documents/Alain_Dyson_Snyder_2006_P369391-Ch64.pdf

American Music Therapy Association. (2018). 2018 AMTA member survey and workforce analysis. Retrieved from https://www.musictherapy.org/assets/1/7/18WorkforceAnalysis.pdf

Beck, D., & Clark, J. (2009). Audition matters more as cognition declines: Cognition matters more as audition declines. Audiology Today, 21(2), 48–59. Retrieved from http://www.audiology.org/news/editorial/Pages/20090320a.aspx?PF=1
Beck, D., & Clark, J. (2009). Audition matters more as cognition declines: Cognition matters more as audition declines. Audiology Today, 21(2), 48–59. Retrieved from http://www.audiology.org/news/editorial/Pages/20090320a.aspx?PF=1
Belgrave, M., Darrow, A. A., Walworth, D., & Wlodarczyk, N. (2011). Music therapy and geriatric populations: A handbook for practicing music therapists and healthcare professionals. Silver Spring, MD: American Music Therapy Association.
Bergman, A. (1990). Auditory scene analysis: The perceptual organisation of sound. Boston: MIT Press.
Chasin, M., & Hockley, N. S. (2014). Some characteristics of amplified music through hearing aids. Hearing Research, 308, 2–12. doi:10.1016/j.heares.2013.07.003
Chen, D. S., Genther, D. J., Betz, J., & Lin, F. R. (2014). Association between hearing impairment and self-reported difficulty in physical functioning. Journal of the American Geriatrics Society, 62(5), 850–856. doi:10.1111/jgs.12800
Ciorba, A., Bianchini, C., Pelucchi, S., & Pastore, A. (2012). The impact of hearing loss on the quality of life of elderly adults. Clinical Interventions in Aging, 7, 159–163. doi:10.2147/CIA.S26059
Clair, A. A., & Memmot, J. (2008). Therapeutic uses of music with older adults. Silver Spring, MD: American Music Therapy Association.
Cohen, G. D., Perlstein, S., Chapline, J., Kelly, J., Firth, K. M., & Simmens, S. (2006). The impact of professionally conducted cultural programs on the physical health, mental health, and social functioning of older adults. The Gerontologist, 46(6), 726–734. doi:10.1093/geront/46.6.726
Condit-Schultz, N., & Huron, D. (2017). Word intelligibility in multi-voice singing: The influence of chorus size. Journal of Voice, 31(1), 121.e1–121.e8. doi:10.1016/j.jvoice.2016.02.011
Correia, R., Barroso, J., & Nieto, A. (2018). Age-related cognitive changes: The importance of modulating factors. Journal of Geriatric Medicine and Gerontology, 4(2), 1–10. doi:10.23937/2469-5858/1510048
Cox, R. M., Alexander, G. C., & Gilmore, C. (1987). Development of the Connected Speech Test (CST). Ear and Hearing, 8(5 Suppl.), 119S–126S. doi:10.1097/00003446-198710001-00010
Cox, R. M., Alexander, G. C., Gilmore, C., & Pusakulich, K. M. (1988). Use of the Connected Speech Test (CST) with hearing-impaired listeners. Ear and Hearing, 9(4), 198–207. doi:10.1097/00003446-198808000-00005
Cox, R. M., Alexander, G. C., Gilmore, C., & Pusakulich, K. M. (1989). The Connected Speech Test version 3: Audiovisual administration. Ear and Hearing, 10(1), 29–32. doi:10.1097/00003446-198902000-00005
Creech, A., Hallam, S., McQueen, H., & Varvarigou, M. (2013). The power of music in the lives of older adults. Research Studies in Music Education, 35, 87–102. doi:10.1177/1321103X13478862
Dalton, D. S., Cruickshanks, K. J., Klein, B. E., Klein, R., Wiley, T. L., & Nondahl, D. M. (2003). The impact of hearing loss on quality of life in older adults. The Gerontologist, 43(5), 661–668. doi:10.1093/geront/43.5.661
Dicarlo, L. M., & Kataja, R. (1951). An analysis of the Utley lipreading test. The Journal of Speech Disorders, 16(3), 226–240. doi:10.1044/jshd.1603.226
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. doi:10.3758/bf03193146
Fine, P. A., & Ginsborg, J. (2014). Making myself understood: Perceived factors affecting the intelligibility of sung text. Frontiers in Psychology, 5, 809. doi:10.3389/fpsyg.2014.00809
Gfeller, K., Christ, A., Knutson, J. F., Witt, S., Murray, K. T., & Tyler, R. S. (2000). Musical backgrounds, listening habits, and aesthetic enjoyment of adult cochlear implant recipients. Journal of the American Academy of Audiology, 11(7), 390–406. PMID: 10976500.
Gfeller, K. E., Darrow, A. A., & Wilhelm, L. A. (2018). Music therapy for persons with sensory loss. In A. Knight, B. LaGasse, & A. Clair (Eds.), Music therapy: An introduction to the profession (pp. 217–245). Silver Spring, MD: American Music Therapy Association.
Ginsborg, J., Fine, P., & Barlow, C. (2011). Have we made ourselves clear? Singers' and non-singers' perceptions of the intelligibility of sung text. In A. Williamon, D. Edwards, & L. Bartel (Eds.), Proceedings of the International Symposium on Performance Science (pp. 111–116). Utrecht, The Netherlands: European Association of Conservatoires (AEC).
Heinrich, A., Knight, S., & Hawkins, S. (2015). Influences of word predictability and type of masker noise on intelligibility of sung text in live concerts. The Journal of the Acoustical Society of America, 138(4), 2373–2386. doi:10.1121/1.4929901
Jeffries, K. J., Fritz, J. B., & Braun, A. R. (2003). Words in melody: An H2 15O PET study of brain activation during singing and speaking. Brain Imaging, 14(5). doi:10.1097/01.wnr.0000066198.94941.a4
Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17(3), 323–328. doi:10.3758/PBR.17.3.323
Johnson, R. B., Huron, D., & Collister, L. (2014). Music and lyrics interactions and their influence on recognition of sung words: An investigation of word frequency, rhyme, metric stress, vocal timbre, melisma, and repetition priming. Empirical Musicology Review, 9, 2–20. doi:10.18061/emr.v9i1
Kleinbaum, D. G., Kupper, L. L., Nizam, A., & Muller, K. E. (2008). Applied regression analysis and multivariable methods (4th ed.). Boston, MA: Cengage Learning.
Krout, R. E. (2007). The attraction of the guitar as an instrument of motivation, preference, and choice for use with clients in music therapy: A review of literature. The Arts in Psychotherapy, 34, 36–52. doi:10.1016/j.aip.2006.08.005
Laukka, P. (2007). Uses of music and psychological well-being among the elderly. Journal of Happiness Studies, 8, 215–241. doi:10.1007/s10902-006-9024-3
Leek, M. R., Molis, M. R., Kubli, L. R., & Tufts, J. B. (2008). Enjoyment of music by elderly hearing-impaired listeners. Journal of the American Academy of Audiology, 19(6), 519–526. doi:10.3766/jaaa.19.6.7
Lindblom, B., & Sundberg, J. (2007). The human voice in speech and singing. In T. Rossing (Ed.), Springer handbook of acoustics (pp. 669–712). New York: Springer.
Lin, F. R., Thorpe, R., Gordon-Salant, S., & Ferrucci, L. (2011). Hearing loss prevalence and risk factors among older adults in the United States. The Journals of Gerontology. Series A, Biological Sciences and Medical Sciences, 66, 582–590. doi:10.1093/gerona/glr002
Lin, F. R., Yaffe, K., Xia, J., Xue, W.-L., Harris, T. B., Purchase-Helzner, E., … Simonsick, E. M. (2013). Hearing loss and cognitive decline among older adults. JAMA Internal Medicine, 173, 293–299. doi:10.1001/jamainternmed.2013.1868
Lotfi, Y., Mehrkian, S., Moossavi, A., & Faghih-Zadeh, S. (2009). Quality of life improvement in hearing-impaired elderly people after wearing a hearing aid. Archives of Iranian Medicine, 12(4), 365–370. PMID: 19566353.
Nasreddine, Z. S., Phillips, N. A., Bédirian, V., Charbonneau, S., Whitehead, V., Collin, I., … Chertkow, H. (2005). The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society, 53(4), 695–699. doi:10.1111/j.1532-5415.2005.53221.x
National Institute on Deafness and Other Communication Disorders (NIDCD). (2014). Age-related hearing loss. Retrieved from http://www.nidcd.nih.gov/health/hearing/Pages/Age-Related-HearingLoss.aspx
Parbery-Clark, A., Skoe, E., & Kraus, N. (2009). Musical experience limits the degradative effects of background noise on the neural processing of sound. The Journal of Neuroscience, 29(45), 14100–14107. doi:10.1523/JNEUROSCI.3256-09.2009
Picou, E. M., Ricketts, T. A., & Hornsby, B. W. (2011). Visual cues and listening effort: Individual variability. Journal of Speech, Language, and Hearing Research, 54(5), 1416–1430. doi:10.1044/1092-4388(2011/10-0154)
Prickett, C. A. (2000). Music therapy for older people: Research comes of age across two decades. In Effectiveness of music therapy procedures: Documentation of research and clinical practice (3rd ed., pp. 297–232). Silver Spring, MD: American Music Therapy Association.
Rowe, J. W., Berkman, L., Fried, L., Fulmer, T., Jackson, J., Naylor, M., … Stone, R. (2016). Preparing for better health and healthcare for an aging population: A vital direction for health and health care (Discussion paper). Washington, DC: National Academy of Medicine. doi:10.31478/201609n. Retrieved from https://nam.edu/preparing-for-better-health-and-health-care-for-an-aging-population-a-vital-direction-for-health-and-health-care/
Rutledge, K. L. (2009). A music listening questionnaire for hearing aid users (Unpublished master's thesis). The University of Canterbury, Christchurch, New Zealand.
Schumm, W. R., Pratt, K. K., Hartenstein, J. L., Jenkins, B. A., & Johnson, G. A. (2013). Determining statistical significance (alpha) and reporting statistical trends: Controversies, issues, and facts. Comprehensive Psychology, 2, 10. doi:10.2466/03.CP.2.10
Studebaker, G. A. (1985). A "rationalized" arcsine transform. Journal of Speech and Hearing Research, 28(3), 455–462. doi:10.1044/jshr.2803.455
Strong, W. J., & Plitnik, G. R. (2007). Music speech audio (3rd ed.). Provo, UT: BYU Academic Publishing.
Tye-Murray, N. (2009). Foundations of aural rehabilitation: Children, adults, and their family members (3rd ed.). Clifton Park, NY: Delmar Cengage Learning.
Utley, J. (1946). A test of lip reading ability. The Journal of Speech Disorders, 11, 109–116. doi:10.1044/jshd.1102.109
Weinstein, B. E. (2012). Geriatric audiology (2nd ed.). New York, NY: Thieme Medical Publishers.
Yinger, O. S. (2014). Adapting choral singing experiences for older adults: The implications of sensory, perceptual, and cognitive changes. International Journal of Music Education, 32, 203–212. doi:10.1177/0255761413508064

Footnotes
1. Scores from 26 to 30 on the Montreal Cognitive Assessment are interpreted as indicative of normal cognition.

© The Author(s) 2020. Published by Oxford University Press on behalf of American Music Therapy Association. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)

TI - Using Contextual and Visual Cues to Improve Sung Word Recognition in Hearing Aid Users
JF - Journal of Music Therapy
DO - 10.1093/jmt/thaa009
DA - 2020-05-01
UR - https://www.deepdyve.com/lp/oxford-university-press/using-contextual-and-visual-cues-to-improve-sung-word-recognition-in-46jLRpJpBS
DP - DeepDyve
ER -