Learning to predict eye fixations for semantic contents using multi-layer sparse network

Chengyao Shen, Qi Zhao

Research output: Contribution to journal › Article › peer-review



In this paper, we present a novel model for saliency prediction under a unified framework of feature integration. The model distinguishes itself by learning directly from natural images and automatically incorporating higher-level semantic information in a scalable manner for gaze prediction. Unlike most existing saliency models, which rely on specific features or object detectors, our model learns multiple stages of features that mimic the hierarchical organization of the ventral stream in the visual cortex and integrates them by adapting their weights to the ground-truth fixation data. To accomplish this, we use a multi-layer sparse network to learn low-, mid-, and high-level features from natural images and train a linear support vector machine (SVM) for weight adaptation and feature integration. Experimental results show that our model learns high-level semantic features such as faces and text and performs competitively with existing approaches in predicting eye fixations.
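The abstract describes a pipeline of sparse feature layers whose outputs are integrated by a linear SVM trained on fixation data. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: dictionaries, dimensions, and the soft-threshold encoder (a cheap stand-in for full sparse-coding inference) are all illustrative assumptions, and the SVM is approximated by hinge-loss sub-gradient descent on toy data.

```python
# Hypothetical sketch (not the paper's code): two sparse feature layers
# followed by a linear classifier that integrates them for saliency scoring.
import numpy as np

rng = np.random.default_rng(0)

def sparse_layer(X, D, lam=0.5):
    """Encode rows of X with dictionary D via soft-thresholded projections,
    a simple surrogate for sparse-coding inference."""
    A = X @ D.T
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

# Toy data: 200 image patches of dimension 64, with fixation labels in {-1, +1}
# generated from a hidden linear rule (purely for illustration).
X = rng.standard_normal((200, 64))
y = np.sign(X @ rng.standard_normal(64))

D1 = rng.standard_normal((32, 64))   # layer-1 dictionary (low-level features)
D2 = rng.standard_normal((16, 32))   # layer-2 dictionary (higher-level features)

H1 = sparse_layer(X, D1)             # low-level sparse codes
H2 = sparse_layer(H1, D2)            # mid/high-level sparse codes
F = np.hstack([H1, H2])              # concatenate layers for feature integration

# Linear SVM surrogate: minimize hinge loss by sub-gradient descent,
# adapting one weight per feature channel.
w = np.zeros(F.shape[1])
for _ in range(200):
    margins = y * (F @ w)
    grad = -(F * y[:, None])[margins < 1].sum(axis=0) / len(y) + 1e-3 * w
    w -= 0.1 * grad

scores = F @ w                       # per-patch saliency scores
train_acc = np.mean(np.sign(scores) == y)
print(f"train accuracy: {train_acc:.2f}")
```

In the actual model the dictionaries would be learned from natural image patches and the SVM trained on human eye-fixation maps; the learned weights then determine how much each feature layer contributes to the final saliency map.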

Original language: English (US)
Pages (from-to): 61-68
Number of pages: 8
State: Published - Aug 22 2014

Bibliographical note

Funding Information:
The work is supported by the Singapore Ministry of Education Academic Research Fund Tier 1 (No. R-263-000-648-133).


Keywords

  • Deep learning
  • Gaze prediction
  • Semantic saliency
  • Sparse coding


