Human efficiency for recognizing 3-D objects in luminance noise

Bosco S. Tjan, Wendy L. Braje, Gordon E. Legge, Daniel Kersten

Research output: Contribution to journal › Article › peer-review

135 Scopus citations

Abstract

The purpose of this study was to establish how efficiently humans use visual information to recognize simple 3-D objects. The stimuli were computer-rendered images of four simple 3-D objects (wedge, cone, cylinder, and pyramid), each rendered from 8 randomly chosen viewing positions as shaded objects, line drawings, or silhouettes. The objects were presented in static, 2-D Gaussian luminance noise. The observer's task was to indicate which of the four objects had been presented. We obtained human contrast thresholds for recognition, and compared these to an ideal observer's thresholds to obtain efficiencies. In two auxiliary experiments, we measured efficiencies for object detection and letter recognition. Our results showed that human object-recognition efficiency is low (3-8%) when compared to efficiencies reported for some other visual-information processing tasks. The low efficiency means that human recognition performance is limited primarily by factors intrinsic to the observer rather than the information content of the stimuli. We found three factors that play a large role in accounting for low object-recognition efficiency: stimulus size, spatial uncertainty, and detection efficiency. Four other factors play a smaller role in limiting object-recognition efficiency: observers' internal noise, stimulus rendering condition, stimulus familiarity, and categorization across views.
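The abstract's method of dividing an ideal observer's threshold by a human threshold can be sketched as follows. This is a minimal illustration of the standard ideal-observer convention (efficiency as the ratio of threshold contrast energies, which scale with squared threshold contrast), not code from the paper; the numeric thresholds below are invented for illustration, chosen only so the result falls in the paper's reported 3-8% range.

```python
def efficiency(human_threshold_contrast: float,
               ideal_threshold_contrast: float) -> float:
    """Statistical efficiency: ratio of ideal to human threshold
    contrast energies. Contrast energy is proportional to squared
    contrast, so efficiency = (c_ideal / c_human) ** 2."""
    return (ideal_threshold_contrast / human_threshold_contrast) ** 2

# Hypothetical thresholds (NOT values from the paper):
# ideal observer reaches criterion accuracy at contrast 0.02,
# a human observer needs contrast 0.10 in the same noise.
eta = efficiency(human_threshold_contrast=0.10,
                 ideal_threshold_contrast=0.02)
print(f"efficiency = {eta:.0%}")  # efficiency = 4%
```

Because the ideal observer is limited only by the stimulus noise, any efficiency below 100% quantifies information lost to factors intrinsic to the human observer, which is the interpretation the abstract draws.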

Original language: English (US)
Pages (from-to): 3053-3069
Number of pages: 17
Journal: Vision Research
Volume: 35
Issue number: 21
DOIs
State: Published - Nov 1995

Keywords

  • Efficiency
  • Ideal observer
  • Letter recognition
  • Object detection
  • Object recognition

