Audio-visual emotion recognition in adult attachment interview

Zhihong Zeng, Yuxiao Hu, Yun Fu, Thomas S. Huang, Glenn I. Roisman, Zhen Wen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

29 Scopus citations

Abstract

Automatic multimodal recognition of spontaneous affective expressions is a largely unexplored and challenging problem. In this paper, we explore audio-visual emotion recognition in a realistic human conversation setting, the Adult Attachment Interview (AAI). Based on the assumption that facial expression and vocal expression reflect the same coarse affective states, positive and negative emotion sequences are labeled according to Facial Action Coding System Emotion Codes. Facial texture in the visual channel and prosody in the audio channel are integrated within the framework of an AdaBoost multi-stream hidden Markov model (AMHMM), in which an AdaBoost learning scheme is used to fuse the component HMMs. Our approach is evaluated in preliminary spontaneous emotion recognition experiments on AAI data.
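The multi-stream HMM fusion described in the abstract scores each emotion class by combining per-stream (audio and visual) HMM likelihoods. Below is a minimal sketch of that decision rule, assuming discrete observations and fixed stream weights; the paper's AMHMM learns the weights via AdaBoost, and all function names and model parameters here are illustrative, not from the paper:

```python
import math


def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (log-sum-exp for stability).

    pi: initial state probabilities, A: transition matrix,
    B: per-state emission probabilities over discrete symbols."""
    n = len(pi)
    # Initialize forward variables in log space.
    alpha = [math.log(pi[i]) + math.log(B[i][obs[0]]) for i in range(n)]
    for t in range(1, len(obs)):
        new = []
        for j in range(n):
            terms = [alpha[i] + math.log(A[i][j]) for i in range(n)]
            m = max(terms)
            lse = m + math.log(sum(math.exp(x - m) for x in terms))
            new.append(lse + math.log(B[j][obs[t]]))
        alpha = new
    m = max(alpha)
    return m + math.log(sum(math.exp(x - m) for x in alpha))


def fused_score(streams, models, weights):
    """Weighted sum of per-stream HMM log-likelihoods for one emotion class.

    streams: one observation sequence per modality (e.g. audio, visual);
    models:  one (pi, A, B) HMM per modality for this class;
    weights: stream weights (fixed here; AdaBoost-learned in the paper)."""
    return sum(w * forward_loglik(obs, *m)
               for obs, m, w in zip(streams, models, weights))
```

Classification then amounts to evaluating `fused_score` once per emotion class (positive, negative) with that class's audio and visual HMMs, and picking the class with the higher fused score.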

Original language: English (US)
Title of host publication: ICMI'06
Subtitle of host publication: 8th International Conference on Multimodal Interfaces, Conference Proceedings
Pages: 139-145
Number of pages: 7
DOIs
State: Published - 2006
Externally published: Yes
Event: ICMI'06: 8th International Conference on Multimodal Interfaces - Banff, AB, Canada
Duration: Nov 2 2006 - Nov 4 2006

Publication series

Name: ICMI'06: 8th International Conference on Multimodal Interfaces, Conference Proceedings

Other

Other: ICMI'06: 8th International Conference on Multimodal Interfaces
Country/Territory: Canada
City: Banff, AB
Period: 11/2/06 - 11/4/06

Keywords

  • Affect recognition
  • Affective computing
  • Emotion recognition
  • Multimodal human-computer interaction
