Spatial context learning in visual search and change detection

Yuhong Jiang, Joo Hyun Song

Research output: Contribution to journal › Article › peer-review


Abstract

Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.

Original language: English (US)
Pages (from-to): 1128-1139
Number of pages: 12
Journal: Perception and Psychophysics
Volume: 67
Issue number: 7
State: Published - Oct 2005
