Saliency in Context (SALICON) is an ongoing effort aimed at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, a large gap remains between model predictions and human behavior. This gap is largely due to the limited ability of models to predict eye fixations on regions with strong semantic content, the so-called semantic gap. This paper presents a focused study on narrowing the semantic gap with an architecture based on a Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye-tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features specifically for saliency prediction that surpass the state of the art by a large margin. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.
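The two components named in the abstract can be illustrated with a minimal NumPy sketch. Here the objective is 1 minus the linear correlation coefficient (CC), a standard saliency evaluation metric, and multi-scale integration is modeled as simple averaging of same-resolution maps. Both choices are assumptions for illustration; the paper's exact loss and fusion scheme may differ, and `cc_loss` and `fuse_scales` are hypothetical names, not from the paper.

```python
import numpy as np

def cc_loss(pred, fixmap):
    """1 - linear correlation coefficient (CC) between a predicted saliency
    map and a ground-truth fixation map. CC is a standard saliency metric;
    using 1 - CC as the training objective is an illustrative assumption."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    f = (fixmap - fixmap.mean()) / (fixmap.std() + 1e-8)
    return 1.0 - float((p * f).mean())

def fuse_scales(maps):
    """Integrate saliency maps predicted at different image scales by
    averaging them at a common resolution (a simple fusion assumption;
    inputs are taken to be pre-resized to the same shape)."""
    return np.mean(np.stack(maps, axis=0), axis=0)
```

A perfect prediction yields a loss near 0, while an anti-correlated one yields a loss near 2, so minimizing this objective pushes predictions toward the fixation map's spatial pattern.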
Original language: English (US)
Title of host publication: 2015 International Conference on Computer Vision, ICCV 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 9
State: Published - Feb 17 2015
Event: 15th IEEE International Conference on Computer Vision, ICCV 2015 - Santiago, Chile
Duration: Dec 11 2015 → Dec 18 2015
Series name: Proceedings of the IEEE International Conference on Computer Vision
Volume: 2015 International Conference on Computer Vision, ICCV 2015
Other: 15th IEEE International Conference on Computer Vision, ICCV 2015
Period: 12/11/15 → 12/18/15
Bibliographical note: Publisher Copyright © 2015 IEEE.