Abstract
Purpose: Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies on linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in a cross-channel auditory-alone task (i.e., a semantics-prosody Stroop task) and a cross-modal audiovisual task (i.e., a semantics-prosody-face Stroop task).

Method: Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated the auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing visual facial expressions during auditory stimulus presentation. Participants were asked to judge the emotional information in each test trial while selectively attending to the channel specified by the instructions.

Results: Accuracy and reaction time data indicated that, despite the increased cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing and did not take precedence over facial expression. While congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2.

Conclusion: Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and a congruence facilitation effect in multisensory integration. Our study contributes tonal-language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploration of the brain mechanisms of cross-channel/modal emotion integration, with potential clinical applications.
Original language | English (US) |
---|---|
Pages (from-to) | 896-912 |
Number of pages | 16 |
Journal | Journal of Speech, Language, and Hearing Research (JSLHR) |
Volume | 63 |
Issue number | 3 |
DOIs | |
State | Published - Mar 23 2020 |
Bibliographical note
Funding Information: This study was supported by grants (awarded to Ding and Zhang) from the Major Project of the National Social Science Foundation of China (18ZDA293). Zhang additionally received support for international exchange and research from the University of Minnesota's Grand Challenges Exploratory Research Grant.
Publisher Copyright:
© 2020 American Speech-Language-Hearing Association.
PubMed: MeSH publication types
- Research Support, Non-U.S. Gov't
- Journal Article