
What is the Melody of That Voice? Probing Unbiased Recognition Accuracy with the Montreal Affective Voices



References (83)

Publisher
Springer Journals
Copyright
Copyright © 2017 by Springer Science+Business Media New York
Subject
Psychology; Personality and Social Psychology; Sociology, general; Social Sciences, general
ISSN
0191-5886
eISSN
1573-3653
DOI
10.1007/s10919-017-0253-4

Abstract

The present study aimed to clarify how listeners decode emotions from human nonverbal vocalizations, exploring unbiased recognition accuracy of vocal emotions selected from the Montreal Affective Voices (MAV) (Belin et al. in Trends Cognit Sci 8:129–135, 2004. doi:10.1016/j.tics.2004.01.008). The MAV battery includes 90 nonverbal vocalizations expressing anger, disgust, fear, pain, sadness, surprise, happiness, and sensual pleasure, as well as neutral expressions, uttered by female and male actors. Using a forced-choice recognition task, 156 native speakers of Portuguese were asked to identify the emotion category underlying each MAV sound, and additionally to rate the valence, arousal, and dominance of these sounds. The analysis focused on unbiased hit rates (Hu score; Wagner in J Nonverbal Behav 17(1):3–28, 1993. doi:10.1007/BF00987006), as well as on the dimensional ratings for each discrete emotion. Further, we examined the relationship between categorical and dimensional ratings, as well as the effects of the speaker's and the listener's sex on these two types of assessment. Surprise vocalizations were associated with the poorest accuracy, whereas happy vocalizations were the most accurately recognized, contrary to previous studies. Happiness was associated with the highest valence and dominance ratings, whereas fear elicited the highest arousal ratings. Recognition accuracy and dimensional ratings of vocal expressions depended on both the speaker's and the listener's sex. Further, discrete vocal emotions were not consistently predicted by dimensional ratings. Using a large sample size, the present study provides, for the first time, unbiased recognition accuracy rates for a widely used battery of nonverbal vocalizations. The results demonstrated a dynamic interplay between listener and speaker variables (e.g., sex) in the recognition of emotion from nonverbal vocalizations. Further, they support the use of both categorical and dimensional accounts of emotion when probing how emotional meaning is decoded from nonverbal vocal cues.

Journal

Journal of Nonverbal Behavior, Springer Journals

Published: Apr 6, 2017
