Seeing what we touch: Recalibrating the use of visual information through haptic feedback

Research output: Contribution to journal › Article › peer-review


When we interact with the world, direct haptic contact provides many opportunities for visuomotor learning. In particular, haptic feedback could be used to correct assumptions that the visual system must make in the presence of ambiguous image information, and thus to recalibrate the visual system's use of that information. However, past research has shown that when haptic and visual information conflict, visual information dominates unless it is substantially degraded. This dominance could be due to the fact that the visual information in those studies was unambiguous and more reliable than the haptic information, potentially rendering an effect of haptic feedback unmeasurable. The purpose of the present study was to assess the role of haptic feedback in the use of visual information using a more sensitive cue combination paradigm. Observers reached for and grasped a real object (a 6 cm square) positioned by a robot arm while viewing an animated version of the square in a simple virtual reality environment. The visual display provided only two cues to the square's depth: the image size of the square and the displacement of its cast shadow. Previous work using a perceptual task has shown that observers form an estimate of object depth from these two cues using a combination rule that weights the cues by their intrinsic reliability (Schrater & Kersten, 2000). By varying the relationship between the depth of the real square and the cues in the animation, we could make the haptic feedback consistent with either image size, shadow displacement, or both cues. The results show that haptic feedback causes a dramatic and complete re-weighting of these two visual cues on a trial-by-trial basis, in favor of the haptically reinforced cue. Thus, haptic feedback can recalibrate the use of visual information. The results will be discussed in the context of a Bayesian model for reaching.
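The reliability-weighted combination rule cited above (Schrater & Kersten, 2000) can be sketched as follows. This is a minimal illustration of the standard linear cue-combination model, in which each cue's weight is its reliability (inverse variance) normalized across cues; the depth estimates and noise levels below are invented for illustration and are not data from the study:

```python
import numpy as np

def combine_cues(estimates, sigmas):
    """Reliability-weighted linear cue combination.

    Each cue's reliability is 1/variance; weights are
    reliabilities normalized to sum to 1. Returns the
    combined estimate and the per-cue weights.
    """
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    return float(np.dot(weights, estimates)), weights

# Illustrative values only: suppose the image-size cue suggests a
# depth of 6.0 cm (sigma = 0.5 cm) and the shadow-displacement cue
# suggests 6.6 cm (sigma = 1.0 cm).
depth, weights = combine_cues([6.0, 6.6], [0.5, 1.0])
# The more reliable cue (image size) receives weight 0.8,
# so the combined estimate is pulled toward it: 6.12 cm.
```

Haptic recalibration, in this framing, corresponds to shifting the effective weights toward the cue that haptic contact reinforces, rather than leaving them fixed at the values implied by visual reliability alone.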

Original language: English (US)
Pages (from-to): 481a
Journal: Journal of Vision
Issue number: 3
State: Published - 2001
