Spatial learning and navigation using a virtual verbal display

Nicholas A. Giudice, Jonathan Z. Bakdash, Gordon E. Legge, Rudrava Roy

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

We report on three experiments that investigate the efficacy of a new type of interface called a virtual verbal display (VVD) for nonvisual learning and navigation of virtual environments (VEs). Although verbal information has been studied for route guidance, little is known about the use of context-sensitive, speech-based displays (e.g., the VVD) for supporting free exploration and wayfinding behavior. During training, participants used the VVD (Experiments I and II) or a visual display (Experiment III) to search the VEs and find four hidden target locations. At test, all participants performed a route-finding task in the corresponding real environment, navigating with vision (Experiments I and III) or from verbal descriptions (Experiment II). Training performance between virtual display modes was comparable, but wayfinding in the real environment was worse after VVD learning than visual learning, regardless of the testing modality. Our results support the efficacy of the VVD for searching computer-based environments but indicate a difference in the cognitive maps built up between verbal and visual learning, perhaps due to the lack of physical movement in the VVD.

Original language: English (US)
Article number: 3
Journal: ACM Transactions on Applied Perception
Volume: 7
Issue number: 1
DOIs
State: Published - Jan 2010

Keywords

  • Human-computer interaction
  • Navigation
  • Verbal learning
  • Virtual environments
  • Virtual verbal display
  • Wayfinding
