Learning visual saliency based on object's relative relationship

Senlin Wang, Qi Zhao, Mingli Song, Jiajun Bu, Chun Chen, Dacheng Tao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

As a challenging issue in both computer vision and psychological research, visual attention has aroused a wide range of discussion and study in recent years. However, conventional computational models focus mainly on low-level information, while high-level information and the interrelationships among objects are ignored. In this paper, we address the relative relationships between items of high-level information, and propose a saliency model based on both low-level and high-level analysis. First, more than 50 categories of objects are selected from nearly 800 images in the MIT data set [1], and concrete quantitative relationships are learned through detailed analysis and computation. Second, using least squares regression with constraints, we derive an optimal saliency model that produces saliency maps. Experimental results indicate that our model outperforms several state-of-the-art methods and matches human eye-tracking data more closely.
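The abstract's second step, combining feature channels into a saliency map via constrained least squares, can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation: the channel matrix `F`, fixation target `g`, and the projected-gradient solver are all assumptions, with non-negativity on the channel weights standing in for the paper's unspecified constraints.

```python
import numpy as np

# Hypothetical sketch: learn non-negative weights that linearly combine
# flattened per-channel saliency maps (low-level + high-level channels)
# into a final map, by least squares fit to fixation data.

rng = np.random.default_rng(0)
n_pixels, n_channels = 500, 4

# F: each column is one flattened feature/saliency channel
F = rng.random((n_pixels, n_channels))
# g: target fixation density (synthetic here, built from known weights)
w_true = np.array([0.5, 0.3, 0.2, 0.0])
g = F @ w_true

def nnls_projected_gradient(F, g, iters=5000):
    """Minimise ||F w - g||^2 subject to w >= 0 via projected gradient."""
    lr = 1.0 / np.linalg.norm(F.T @ F, 2)  # step size from largest eigenvalue
    w = np.zeros(F.shape[1])
    for _ in range(iters):
        grad = F.T @ (F @ w - g)            # gradient of the squared error
        w = np.maximum(w - lr * grad, 0.0)  # project back onto w >= 0
    return w

w = nnls_projected_gradient(F, g)
saliency = F @ w  # combined (flattened) saliency map
```

The constraint keeps each channel's contribution interpretable as a non-negative importance; an off-the-shelf solver such as `scipy.optimize.nnls` would serve equally well here.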

Original language: English (US)
Title of host publication: Neural Information Processing - 19th International Conference, ICONIP 2012, Proceedings
Pages: 318-327
Number of pages: 10
Edition: PART 5
State: Published - 2012
Event: 19th International Conference on Neural Information Processing, ICONIP 2012 - Doha, Qatar
Duration: Nov 12 2012 - Nov 15 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 5
Volume: 7667 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Keywords

  • High-level Information
  • Low-level Information
  • Relative relationship
  • Visual attention

