One of the most desirable characteristics of a robotic manipulator is its flexibility. Flexibility and adaptability can be achieved by incorporating vision and, more generally, sensory information in the feedback loop. Our research introduces a framework called controlled active vision for efficient integration of the vision sensor in the feedback loop. This framework was applied to the problem of robotic visual tracking and servoing, and the results were very promising. Full 3-D robotic visual tracking was achieved at rates of 30 Hz. Most importantly, the tracking was successful even under the assumption of poor calibration of the eye-in-hand system. This paper extends the framework to other problems of sensor-based robotics, such as the derivation of depth maps from controlled motion, vision-assisted grasping, active calibration of the robot-camera system, and the computation of the relative pose of the target with respect to the camera. We address these problems by combining adaptive control techniques with computer vision algorithms. The paper concludes with a discussion of several related issues, such as the stability and robustness of the proposed algorithms and the problem of incorporating stereo information into the existing algorithms in order to increase the accuracy of the estimated depth.
Bibliographical note
Funding Information:
This work has been supported by the Department of Energy (Sandia National Laboratories) through Contracts #AG3752D and #AL-3921, the National Science Foundation through Contracts #IBI-9502245 and #IFU-9419693, the Minnesota Department of Transportation through Contracts #71789729X%169 and #71789-72447-159, the Center for Transportation Studies through Contract #USDOT/DTBS 93-G-9617-01, the McKnight Land-Grant Professorship Program at the University of Minnesota, the Graduate School of the University of Minnesota, and the Department of Computer Science at the University of Minnesota.
- Active vision
- Depth derivation
- Optical flow
- Robotic visual tracking