Egocentric distance judgments in full-cue video-see-through VR conditions are no better than distance judgments to targets in a void

Koorosh Vaziri, Maria Bondy, Amanda Bui, Victoria Interrante

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Understanding the extent to which, and conditions under which, scene detail affects spatial perception accuracy can inform the responsible use of sketch-like rendering styles in applications such as immersive architectural design walkthroughs using 3D concept drawings. This paper reports the results of an experiment that provides important new insight into this question using a custom-built, portable video-see-through (VST) conversion of an optical-see-through head-mounted display (HMD). Participants made egocentric distance judgments by blind walking to the perceived location of a real physical target in a real-world outdoor environment under three different conditions of HMD-mediated scene detail reduction: full detail (raw camera view), partial detail (Sobel-filtered camera view), and no detail (complete background subtraction), and in a control condition of unmediated real-world viewing through the same HMD. Despite the significant differences in participants' ratings of visual and experiential realism between the three different video-see-through rendering conditions, we found no significant difference in the distances walked between these conditions. Consistent with prior findings, participants underestimated distances to a significantly greater extent in each of the three VST conditions than in the real-world condition. The lack of any clear penalty to task performance accuracy, not only from the removal of scene detail but also from the removal of all contextual cues to the target location, suggests that participants may be relying nearly exclusively on context-independent information such as angular declination when performing the blind-walking task. This observation highlights the limitations of using blind walking to the perceived location of a target on the ground to make inferences about people's understanding of the 3D space of the virtual environment surrounding the target.
For applications like immersive architectural design, where we seek to verify the equivalence of the 3D spatial understanding derived from virtual immersion and real world experience, additional measures of spatial understanding should be considered.

Original language: English (US)
Title of host publication: Proceedings - 2021 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 336-344
Number of pages: 9
ISBN (Electronic): 9780738125565
State: Published - Mar 1 2021
Externally published: Yes
Event: 28th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2021 - Virtual, Lisboa, Portugal
Duration: Mar 27 2021 – Apr 3 2021

Publication series

Name: 2021 IEEE Virtual Reality and 3D User Interfaces (VR)

Conference

Conference: 28th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2021
Country/Territory: Portugal
City: Virtual, Lisboa
Period: 3/27/21 – 4/3/21

Bibliographical note

Funding Information:
This research was supported by the National Science Foundation through grants II-NEW 1305401 and CHS: Small 1526693, and by the Linda and Ted Johnson Digital Design Consortium Endowment.

Publisher Copyright:
© 2021 IEEE.

Keywords

  • Non-photorealistic rendering
  • Spatial perception
  • Virtual reality
