Applying the controlled active vision framework to the visual servoing problem

Nikolaos P Papanikolopoulos, Pradeep K. Khosla

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Our previous work introduced the framework of Controlled Active Vision, which provides a bridge between Computer Vision and Control. This paper presents some of the vision techniques that were employed to automatically select features, measure feature displacements, and evaluate measurements during the various applications of the Controlled Active Vision framework. We experimented with many different techniques; the most robust proved to be the Sum-of-Squared Differences (SSD) optical flow technique. In addition, several techniques for the evaluation of the measurements are presented. One important characteristic of these techniques is that they can also be used for the selection of features for tracking, in conjunction with several numerical criteria that guarantee the robustness of the servoing. These techniques are important aspects of our work since they can be implemented either on-line or off-line. An extension of the SSD measure to color images is presented, and the results from the application of these techniques to real images are discussed. Finally, the derivation of depth maps through the controlled motion of the hand-eye system is outlined.
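The SSD optical flow technique measures a feature's displacement by comparing a small window around the feature in one frame against candidate windows in the next frame, and taking the shift that minimizes the sum of squared intensity differences. A minimal sketch of this idea (not the authors' implementation; the window and search sizes here are hypothetical):

```python
# SSD displacement sketch: hypothetical window/search radii, grayscale images.
import numpy as np

def ssd_displacement(prev, curr, center, win=2, search=4):
    """Estimate the (dy, dx) displacement of the window centered at
    `center` in `prev` by minimizing the sum of squared differences
    over a (2*search+1)^2 neighborhood of shifts in `curr`."""
    cy, cx = center
    tmpl = prev[cy - win:cy + win + 1, cx - win:cx + win + 1].astype(float)
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            cand = curr[y - win:y + win + 1, x - win:x + win + 1].astype(float)
            ssd = np.sum((tmpl - cand) ** 2)
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

# Synthetic check: shift a random image by (1, 2) and recover the displacement.
rng = np.random.default_rng(0)
frame = rng.integers(0, 255, (32, 32))
shifted = np.roll(np.roll(frame, 1, axis=0), 2, axis=1)
print(ssd_displacement(frame, shifted, (15, 15)))  # → (1, 2)
```

The paper's extension to color images follows the same pattern, with the squared difference accumulated over all color channels rather than a single intensity channel.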

Original language: English (US)
Title of host publication: American Control Conference
Editors: Anon
Publisher: Publ by IEEE
Pages: 1332-1338
Number of pages: 7
ISBN (Print): 0780308611
State: Published - Dec 1 1993
Event: Proceedings of the 1993 American Control Conference Part 3 (of 3) - San Francisco, CA, USA
Duration: Jun 2 1993 - Jun 4 1993

Publication series

Name: American Control Conference

Other

Other: Proceedings of the 1993 American Control Conference Part 3 (of 3)
City: San Francisco, CA, USA
Period: 6/2/93 - 6/4/93

