Eye-in-hand robotic tasks in uncalibrated environments

Christopher E. Smith, Scott A. Brandt, Nikolaos P. Papanikolopoulos

Research output: Contribution to journal › Article › peer-review



Flexible operation of a robotic agent in an uncalibrated environment requires the ability to recover unknown or partially known parameters of the workspace through sensing. Of the sensors available to a robotic agent, visual sensors provide information that is richer and more complete than other sensors. In this paper we present robust techniques for the derivation of depth from feature points on a target's surface and for the accurate and high-speed tracking of moving targets. We use these techniques in a system that operates with little or no a priori knowledge of object- and camera-related parameters to robustly determine such object-related parameters as velocity and depth. Such determination of extrinsic environmental parameters is essential for performing higher level tasks such as inspection, exploration, tracking, grasping, and collision-free motion planning. For both applications, we use the Minnesota robotic visual tracker (MRVT) (a single visual sensor mounted on the end-effector of a robotic manipulator combined with a real-time vision system) to automatically select feature points on surfaces, to derive an estimate of the environmental parameter in question, and to supply a control vector based upon these estimates to guide the manipulator.
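The abstract notes that the system derives depth from tracked feature points using an eye-in-hand camera. The paper's actual estimator is not given here, but the underlying pinhole geometry can be sketched: a known lateral camera translation shifts a feature's image projection in proportion to focal length over depth, so depth is recoverable from the observed pixel shift. The function below is an illustrative sketch of that relation only, not the authors' algorithm; all names are hypothetical.

```python
def depth_from_translation(focal_px: float, baseline_m: float,
                           pixel_shift: float) -> float:
    """Estimate depth Z of a feature from a known lateral camera move.

    Pinhole model: translating the camera by baseline_m parallel to the
    image plane shifts the feature's projection by
        pixel_shift = focal_px * baseline_m / Z,
    so Z = focal_px * baseline_m / |pixel_shift|.
    (Hypothetical helper for illustration, not the MRVT estimator.)
    """
    if pixel_shift == 0:
        raise ValueError("feature did not move; depth is unobservable")
    return focal_px * baseline_m / abs(pixel_shift)


# Example: 500 px focal length, 0.1 m end-effector translation,
# observed 25 px feature shift -> the feature lies 2.0 m away.
z = depth_from_translation(500.0, 0.1, 25.0)
```

In practice such an estimate would be computed per feature and filtered over time to reject tracking noise, which is consistent with the paper's emphasis on robust estimation.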

Original language: English (US)
Pages (from-to): 903-914
Number of pages: 12
Journal: IEEE Transactions on Robotics and Automation
Issue number: 6
State: Published - 1997


Keywords

  • Active and real-time vision
  • Experimental computer vision
  • Systems and applications
  • Vision-guided robotics

