2D observers for human 3D object recognition?

Zili Liu, Daniel Kersten

Research output: Contribution to journal › Article

17 Scopus citations

Abstract

In human object recognition, converging evidence has shown that subjects' performance depends on their familiarity with an object's appearance. The extent of this dependence is a function of inter-object similarity: the more similar the objects are, the stronger the dependence will be and the more dominant the two-dimensional (2D) image-based information will be. However, the degree to which three-dimensional (3D) model-based information is used remains an area of strong debate. Previously, the authors showed that no model with independent 2D templates allowing 2D rotations in the image plane can account for human performance in discriminating novel object views. Here the authors derive an analytic formulation of a Bayesian model that gives rise to the best possible performance under 2D affine transformations and demonstrate that this model cannot account for human performance in 3D object discrimination. Relative to this model, human statistical efficiency is higher for novel views than for learned views, suggesting that human observers have used some 3D structural information.
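The two quantities at the heart of the abstract can be sketched concretely. Below is a minimal, hypothetical illustration (not the paper's actual model): a least-squares 2D affine alignment of a template to a view, of the kind a 2D-affine observer would rely on, and the standard statistical-efficiency ratio comparing human to ideal-observer sensitivity. The function names and the point-set representation are assumptions for illustration only.

```python
import numpy as np

def best_affine_match(template, view):
    """Residual error after the best 2D affine alignment of template to view.

    template, view: (N, 2) arrays of corresponding 2D point coordinates.
    Solves for a 2x2 matrix A and translation t minimizing
    ||template @ A.T + t - view||^2 via linear least squares.
    (Illustrative stand-in for a 2D-affine template observer, not the
    paper's Bayesian formulation.)
    """
    n = template.shape[0]
    # Augment with a column of ones so translation is estimated jointly.
    X = np.hstack([template, np.ones((n, 1))])          # (N, 3)
    params, *_ = np.linalg.lstsq(X, view, rcond=None)   # (3, 2): [A.T; t]
    residual = view - X @ params
    return float(np.sqrt((residual ** 2).sum()))

def statistical_efficiency(d_human, d_ideal):
    """Efficiency = (d'_human / d'_ideal)^2: the fraction of the ideal
    observer's information that the human observer effectively uses."""
    return (d_human / d_ideal) ** 2
```

If human efficiency relative to the 2D-affine ideal exceeds 1 is impossible by construction; the paper's point is that efficiency computed this way is *higher* for novel views than learned views, which a purely 2D observer cannot explain.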

Original language: English (US)
Pages (from-to): 2507-2519
Number of pages: 13
Journal: Vision Research
Volume: 38
Issue number: 15-16
DOIs
State: Published - Aug 1 1998

Keywords

  • Affine transformation
  • Ideal observer
  • Object recognition
  • Object representation
  • Template matching
