Anticipating Where People will Look Using Adversarial Networks

Mengmi Zhang, Keng Teck Ma, Joo Hwee Lim, Qi Zhao, Jiashi Feng

Research output: Contribution to journal › Article


Abstract

We introduce the new problem of gaze anticipation on future frames, which extends the conventional gaze prediction problem beyond the current frame. To solve this problem, we propose a new generative adversarial network based model, Deep Future Gaze (DFG), comprising two pathways: DFG-P anticipates gaze prior maps conditioned on the input frame, which encodes task influences, while DFG-G learns to model both semantic and motion information for future frame generation. DFG-P and DFG-G are then fused to anticipate future gazes. DFG-G consists of two networks: a generator and a discriminator. The generator uses a two-stream spatial-temporal convolutional architecture (3D-CNN) that explicitly untangles foreground from background to generate future frames; it then attaches another 3D-CNN for gaze anticipation based on these synthetic frames. The discriminator plays against the generator by distinguishing the generator's synthetic frames from real frames. Experimental results on publicly available egocentric and third-person video datasets show that DFG significantly outperforms all competitive baselines. We also demonstrate that DFG achieves better gaze prediction on current frames in egocentric and third-person videos than state-of-the-art methods.
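The abstract states that the DFG-P prior pathway and the DFG-G generation pathway are fused to produce the final anticipated gaze map, but does not specify the fusion operator. The sketch below illustrates one plausible reading, assuming element-wise multiplicative fusion followed by normalization; the function name `fuse_gaze_maps` and the toy 4x4 maps are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_gaze_maps(prior_map, gaze_map, eps=1e-8):
    """Fuse a task-driven gaze prior (DFG-P) with a generation-based
    gaze map (DFG-G) into one anticipated gaze distribution.

    Element-wise product fusion is an assumption for illustration;
    the paper only states that the two pathways are fused.
    """
    fused = prior_map * gaze_map
    # Normalize so the fused map sums to 1 (a spatial distribution)
    return fused / (fused.sum() + eps)

# Toy 4x4 maps standing in for one anticipated future frame
rng = np.random.default_rng(0)
prior = rng.random((4, 4))
gaze = rng.random((4, 4))
anticipated = fuse_gaze_maps(prior, gaze)
print(anticipated.shape, round(float(anticipated.sum()), 6))
```

In a real system both inputs would be per-frame saliency-like maps produced by the two pathways for each anticipated future frame; the multiplicative form lets the prior suppress generated-gaze mass in task-irrelevant regions.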

Original language: English (US)
Article number: 8471119
Pages (from-to): 1783-1796
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 41
Issue number: 8
DOIs
State: Published - Aug 1 2019

Keywords

  • Egocentric videos
  • gaze anticipation
  • generative adversarial network
  • saliency
  • visual attention

PubMed: MeSH publication types

  • Journal Article
  • Research Support, Non-U.S. Gov't
