Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost

Qi Zhao, Christof Koch

Research output: Contribution to journal › Article › peer-review

75 Scopus citations

Abstract

To predict where subjects look under natural viewing conditions, biologically inspired saliency models decompose visual input into a set of feature maps across spatial scales. The outputs of these feature maps are then summed to yield the final saliency map. We study the integration of bottom-up feature maps across multiple spatial scales using eye movement data from four recent eye-tracking datasets. We use AdaBoost as the central computational module, handling feature selection, thresholding, weight assignment, and integration in a principled, nonlinear learning framework. By combining the outputs of the feature maps via a series of nonlinear classifiers, the new model consistently predicts eye movements better than any of its competitors.
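To illustrate the kind of nonlinear integration the abstract describes, here is a minimal sketch, not the authors' implementation: it trains an AdaBoost classifier over per-location feature-map responses using scikit-learn. The data shapes, the choice of library, and the random placeholder data (standing in for real fixation labels from eye-tracking datasets) are all assumptions made for illustration.

```python
# Hedged sketch: AdaBoost over saliency feature-map responses.
# X would hold feature-map values (e.g., color, intensity, orientation
# channels at several spatial scales) sampled at image locations;
# y would mark fixated (1) vs. non-fixated (0) locations.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Placeholder data: 1000 locations x 12 feature maps (e.g., 4 channels
# at 3 scales). Real labels would come from eye-movement recordings.
n_samples, n_features = 1000, 12
X = rng.random((n_samples, n_features))
y = rng.integers(0, 2, n_samples)

# scikit-learn's default weak learner is a depth-1 decision stump:
# each boosting round picks one feature map and one threshold, so
# feature selection, thresholding, and weight assignment happen
# jointly inside the boosting loop.
model = AdaBoostClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# The real-valued boosted score at each location plays the role of
# the saliency map (reshape per-pixel scores back to H x W in practice).
scores = model.decision_function(X)
```

Because each stump is a thresholded single feature, the boosted ensemble is exactly a weighted combination of nonlinear (step-function) classifiers over the feature maps, which mirrors the role the abstract assigns to AdaBoost in place of a simple linear sum.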

Original language: English (US)
Article number: 22
Journal: Journal of Vision
Volume: 12
Issue number: 6
DOIs
State: Published - 2012

Keywords

  • AdaBoost
  • Computational saliency model
  • Feature integration

