Gender Differences in Identifying Facial, Prosodic, and Semantic Emotions Show Category- and Channel-Specific Effects Mediated by Encoder's Gender

Yi Lin, Hongwei Ding, Yang Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Purpose: The nature of gender differences in emotion processing has remained unclear due to discrepancies in the existing literature. This study examined the modulatory effects of emotion category and communication channel on gender differences in verbal and nonverbal emotion perception.

Method: Eighty-eight participants (43 female and 45 male) were asked to identify three basic emotions (happiness, sadness, and anger) and neutrality encoded by female or male actors through a verbal (i.e., semantic) channel or nonverbal (i.e., facial and prosodic) channels.

Results: Although women showed an overall performance advantage, their superiority depended on the specific emotion and channel. Women outperformed men for two basic emotions (happiness and sadness) in the nonverbal channels but only for the anger category with verbal content. Conversely, men performed better for the anger category in the nonverbal channels and for the other two emotions (happiness and sadness) in verbal content. There was also an emotion- and channel-specific interaction between decoder and encoder gender, with male participants showing higher sensitivity to sad faces and prosody portrayed by female encoders.

Conclusion: These findings reveal explicit emotion processing as a highly dynamic, complex process with significant gender differences tied to specific emotion categories and communication channels.

Supplemental Material: https://doi.org/10.23641/asha.15032583
Original language: English (US)
Pages (from-to): 2941-2955
Number of pages: 15
Journal: Journal of Speech, Language, and Hearing Research
Volume: 64
Issue number: 8
State: Published - Aug 9, 2021

Bibliographical note

Funding Information:
H.W.D. and Y.Z. were supported by the major program of the National Fund of Philosophy and Social Science of China (18ZDA293). Y.Z. additionally received support from the University of Minnesota's Brain Imaging Grant and its Grand Challenges Exploratory Research Grant for international collaboration. We also thank the editor for providing insightful comments and suggestions.

Publisher Copyright:
© 2021 American Speech-Language-Hearing Association.

PubMed: MeSH publication types

  • Journal Article
  • Research Support, Non-U.S. Gov't
