Wayfinding with words: Spatial learning and navigation using dynamically updated verbal descriptions

Nicholas A. Giudice, Jonathan Z. Bakdash, Gordon E. Legge

Research output: Contribution to journal › Review article › peer-review



This work investigates whether large-scale indoor layouts can be learned and navigated non-visually, using verbal descriptions of layout geometry that are dynamically updated contingent on a participant's location in a building. In previous research, verbal information has been used to facilitate route following, not to support free exploration and wayfinding. Our results with blindfolded sighted participants demonstrate that accurate learning and wayfinding performance is possible using verbal descriptions, and that describing only local geometric detail is sufficient. In addition, no differences in learning or navigation performance were observed between the verbal study and a control study using visual input. Verbal learning was also compared to the performance of a random-walk model, demonstrating that human search behavior is not based on chance decision-making. However, the model performed more like human participants after adding a constraint that biased it against reversing direction.

Original language: English (US)
Pages (from-to): 347-358
Number of pages: 12
Journal: Psychological Research
Issue number: 3
State: Published - May 2007

Bibliographical note

Funding Information:
Acknowledgments: This research was supported by NIDRR grant H133A011903 and NIH training grant 5T32 EY07133.


