Abstract
In this paper, we present a novel model for saliency prediction under a unified framework of feature integration. The model distinguishes itself by learning directly from natural images and automatically incorporating higher-level semantic information in a scalable manner for gaze prediction. Unlike most existing saliency models, which rely on specific features or object detectors, our model learns multiple stages of features that mimic the hierarchical organization of the ventral stream in the visual cortex, and integrates them by adapting their weights based on ground-truth fixation data. To accomplish this, we use a multi-layer sparse network to learn low-, mid- and high-level features from natural images, and train a linear support vector machine (SVM) for weight adaptation and feature integration. Experimental results show that our model learns high-level semantic features such as faces and text, and performs competitively with existing approaches in predicting eye fixations.
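The integration step described in the abstract (adapting linear weights over per-pixel feature maps using ground-truth fixations) can be sketched as below. This is a minimal illustration, not the paper's implementation: the sparse feature-learning stage is omitted (random maps stand in for learned features), the hinge-loss sub-gradient solver stands in for an off-the-shelf linear SVM, and all sizes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for three per-pixel feature maps (low-, mid-, high-level);
# in the paper these would come from the multi-layer sparse network.
H, W = 8, 8
feature_maps = rng.random((3, H, W))

# Hypothetical ground-truth fixation mask (1 = fixated pixel).
fixations = (rng.random((H, W)) > 0.8).astype(float)

# Flatten to a (pixels, features) design matrix and SVM-style labels.
X = feature_maps.reshape(3, -1).T   # shape (64, 3)
y = 2.0 * fixations.ravel() - 1.0   # labels in {-1, +1}

# Linear SVM via sub-gradient descent on the regularized hinge loss
# (a simple stand-in for a proper linear SVM solver).
w, b = np.zeros(3), 0.0
lam, lr = 0.01, 0.1
for _ in range(200):
    margins = y * (X @ w + b)
    viol = margins < 1                      # points violating the margin
    if viol.any():
        grad_w = lam * w - (y[viol, None] * X[viol]).mean(axis=0)
        grad_b = -y[viol].mean()
    else:
        grad_w, grad_b = lam * w, 0.0
    w -= lr * grad_w
    b -= lr * grad_b

# The saliency map is the learned linear combination of the feature maps.
saliency = (X @ w + b).reshape(H, W)
```

The learned weight vector `w` plays the role of the adapted feature weights: larger entries indicate feature channels that better predict fixated locations.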
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 61-68 |
| Number of pages | 8 |
| Journal | Neurocomputing |
| Volume | 138 |
| State | Published - Aug 22 2014 |
Bibliographical note
Funding Information: This work was supported by the Singapore Ministry of Education Academic Research Fund Tier 1 (No. R-263-000-648-133).
Keywords
- Deep learning
- Gaze prediction
- Semantic saliency
- Sparse coding