Dynamics of processing invisible faces in the brain: Automatic neural encoding of facial expression information

Yi Jiang, Robert W. Shannon, Nathalie Vizueta, Edward M. Bernat, Christopher J. Patrick, Sheng He

Research output: Contribution to journal › Article › peer-review



The fusiform face area (FFA) and the superior temporal sulcus (STS) are suggested to process facial identity and facial expression information, respectively. We recently demonstrated a functional dissociation between the FFA and the STS, as well as correlated sensitivity of the STS and the amygdala to facial expressions, using an interocular suppression paradigm [Jiang, Y., He, S., 2006. Cortical responses to invisible faces: dissociating subsystems for facial-information processing. Curr. Biol. 16, 2023-2029.]. In the current event-related brain potential (ERP) study, we investigated the temporal dynamics of facial information processing. Observers viewed neutral, fearful, and scrambled face stimuli, presented either visibly or rendered invisible through interocular suppression. Relative to scrambled face stimuli, intact visible faces elicited larger positive P1 (110-130 ms) and larger negative N1 or N170 (160-180 ms) potentials at posterior occipital and bilateral occipito-temporal regions, respectively, with the N170 amplitude significantly greater for fearful than for neutral faces. Invisible intact faces generated a stronger signal than scrambled faces at 140-200 ms over posterior occipital areas, whereas invisible fearful faces (compared to neutral and scrambled faces) elicited a significantly larger negative deflection starting at 220 ms along the STS. These results provide further evidence for cortical processing of facial information without awareness and elucidate the temporal sequence of automatic facial expression information extraction.

Original language: English (US)
Pages (from-to): 1171-1177
Number of pages: 7
Issue number: 3
State: Published - Feb 1 2009

Bibliographical note

Funding Information:
This research was supported by the James S. McDonnell Foundation, the National Institutes of Health Grants R01 EY015261-01, P50 MH072850, and P30 NS057091, and the National Institute of Child Health and Human Development Grant T32 HD007151. Y.J. was also supported by the Eva O. Miller Fellowship, the Neuroengineering Fellowship, and the Doctoral Dissertation Fellowship from the University of Minnesota.


Keywords:
  • Awareness
  • Event-related potential (ERP)
  • Face
  • Facial expression


