Brain mechanisms for processing co-speech gesture: A cross-language study of spatial demonstratives

James Stevens, Yang Zhang

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

This electrophysiological study investigated the relationship between language and nonverbal socio-spatial context in the use of demonstratives in speech communication. Adult participants from an English language group and a Japanese language group were asked to make congruency judgments on simultaneous presentations of an audio demonstrative phrase in their native language and a picture that included two human figures, as speaker and hearer, along with a referent object in different spatial arrangements. The demonstratives ("this" and "that" in English; "ko," "so," and "a" in Japanese) were paired with the visual scenes to produce expected and unexpected combinations for referring to the object based on its relative spatial distance from the speaker and hearer. Half of the trials included an accompanying pointing gesture in the picture; the other half did not. Behavioral data showed robust congruency effects, with longer reaction times for incongruent trials in both subject groups irrespective of the presence or absence of the pointing gesture. Both subject groups also showed a significant N400-like congruency effect in the event-related potential responses for the gesture trials, a finding predicted from previous work (Stevens & Zhang, 2013). In the no-gesture trials, only the English data showed a P600 congruency effect preceded by a negative deflection. These results provide evidence for shared brain mechanisms for processing demonstrative expression congruency, as well as for language-specific neural sensitivity in encoding the co-expressivity of gesture and speech.

Original language: English (US)
Pages (from-to): 27-47
Number of pages: 21
Journal: Journal of Neurolinguistics
Volume: 30
Issue number: 1
DOIs
State: Published - Jul 2014

Bibliographical note

Funding Information:
This work was supported by a University of Minnesota start-up fund, an Autism Initiative Project Award from the Department of Pediatrics, and three Brain Imaging Research Awards from the Office of the Associate Dean for Research and Graduate Programs, College of Liberal Arts. Additional funding for subject fees was provided by the UMN Linguistics Program. We gratefully acknowledge invaluable assistance from Tess Koerner, Sharon Miller, and Dr. Edward Carney.

Copyright:
Copyright 2020 Elsevier B.V. All rights reserved.

Keywords

  • Gesture
  • Monitoring theory
  • Mutually adaptive modalities hypothesis
  • Speech
