Computer vision issues during eye-in-hand robotic tasks

Nikolaos P. Papanikolopoulos, Christopher E. Smith

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Our previous work introduced the framework of Controlled Active Vision, which provides a bridge between computer vision and control theory. This paper presents some of the computer vision techniques employed to detect targets, automatically select features, measure feature displacements, and evaluate measurements across the various applications of the Controlled Active Vision framework. We experimented with many different techniques; the Sum-of-Squared Differences (SSD) optical flow technique proved the most robust. Search optimizations and dynamic pyramiding are proposed to provide real-time performance. In addition, several techniques for evaluating the measurements are presented. An important characteristic of these techniques is that they can also be used for selecting features for tracking, in conjunction with several numerical criteria that guarantee the robustness of the servoing. These techniques are important aspects of our work since they can be implemented either on-line or off-line. The paper concludes with results from the application of these techniques to real images.
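The SSD matching that the abstract identifies as the most robust measurement technique can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, window size, and search range are assumptions, and the exhaustive search shown here is what the paper's search optimizations and dynamic pyramiding would accelerate.

```python
import numpy as np

def ssd_match(prev, curr, center, win=7, search=10):
    """Track a feature window from frame `prev` into frame `curr` by
    minimizing the Sum-of-Squared Differences (SSD) over a search area.

    Returns the best-matching (row, col) in `curr` and its SSD score;
    the displacement is the difference from `center`.
    """
    r, c = center
    h = win // 2
    # Feature template: a (win x win) window around the feature in the previous frame.
    template = prev[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    best_score, best_pos = np.inf, center
    # Exhaustive search over candidate displacements within +/- `search` pixels.
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            # Skip candidates whose window would fall outside the image.
            if (rr - h < 0 or cc - h < 0 or
                    rr + h + 1 > curr.shape[0] or cc + h + 1 > curr.shape[1]):
                continue
            candidate = curr[rr - h:rr + h + 1, cc - h:cc + h + 1].astype(float)
            score = np.sum((template - candidate) ** 2)
            if score < best_score:
                best_score, best_pos = score, (rr, cc)
    return best_pos, best_score
```

The SSD score itself doubles as a confidence measure for evaluating the match, and textured windows that produce a sharp, unambiguous SSD minimum are natural candidates for the automatic feature selection the abstract mentions.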

Original language: English (US)
Title of host publication: Proceedings - IEEE International Conference on Robotics and Automation
Pages: 2989-2994
Number of pages: 6
DOIs
State: Published - 1995
Event: Proceedings of the 1995 IEEE International Conference on Robotics and Automation. Part 1 (of 3) - Nagoya, Japan
Duration: May 21, 1995 – May 27, 1995

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 3
ISSN (Print): 1050-4729


