Using vision-based control techniques for grasping objects

Christopher E. Smith, Nikolaos P. Papanikolopoulos

Research output: Contribution to journal › Conference article

1 Citation (Scopus)

Abstract

We present additions to the Controlled Active Vision framework that focus upon the autonomous grasping of a moving object in the manipulator's workspace. Our work extends the capabilities of an eye-in-hand robotic system beyond those of a 'pointer' or a 'camera orienter' to provide the flexibility required to robustly interact with the environment in the presence of uncertainty. The proposed work is experimentally verified using the Minnesota Robotic Visual Tracker (MRVT) to automatically select object features, to derive estimates of unknown environmental parameters, and to supply a control vector based upon these estimates to guide the manipulator in the grasping of a moving object. The system grasps objects in the manipulator's workspace without requiring the object to follow a specific trajectory and without requiring the object to maintain a specific orientation.
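
The abstract gives no implementation details, but the control structure it alludes to (track selected image features, estimate unknown parameters such as depth, and convert the feature error into a velocity command for the manipulator) follows the standard image-based visual-servoing pattern. The sketch below is a minimal, generic illustration of that pattern in Python, not the MRVT implementation; the function names, the proportional gain, and the pseudo-inverse of the stacked image Jacobian are assumptions made purely for illustration.

import numpy as np

def interaction_matrix(u, v, Z, f=1.0):
    # Image Jacobian (interaction matrix) of one point feature at image
    # coordinates (u, v), with estimated depth Z and focal length f.
    return np.array([
        [-f / Z, 0.0, u / Z, u * v / f, -(f + u * u / f), v],
        [0.0, -f / Z, v / Z, f + v * v / f, -u * v / f, -u],
    ])

def servo_step(features, targets, depths, gain=0.5):
    # Proportional image-based visual-servoing law: stack the per-feature
    # Jacobians, form the feature error, and map it to a 6-DOF velocity
    # command [vx, vy, vz, wx, wy, wz] expressed in the camera frame.
    L = np.vstack([interaction_matrix(u, v, Z) for (u, v), Z in zip(features, depths)])
    error = (np.asarray(features, dtype=float) - np.asarray(targets, dtype=float)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical usage: drive four tracked corner features toward the image
# configuration that corresponds to a graspable pose of the moving object.
tracked = [(10.0, 5.0), (-12.0, 6.0), (11.0, -7.0), (-9.0, -8.0)]
desired = [(8.0, 8.0), (-8.0, 8.0), (8.0, -8.0), (-8.0, -8.0)]
velocity_command = servo_step(tracked, desired, depths=[0.6, 0.6, 0.6, 0.6])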

Original language: English (US)
Pages (from-to): 4434-4439
Number of pages: 6
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Volume: 5
State: Published - Dec 1 1995
Event: Proceedings of the 1995 IEEE International Conference on Systems, Man and Cybernetics. Part 2 (of 5) - Vancouver, BC, Canada
Duration: Oct 22 1995 - Oct 25 1995

Fingerprint

Manipulators
End effectors
Robotics
Cameras
Trajectories
Uncertainty

Cite this

Using vision-based control techniques for grasping objects. / Smith, Christopher E.; Papanikolopoulos, Nikolaos P.

In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Vol. 5, 01.12.1995, p. 4434-4439.

@article{58fe56d2c23c41cc9eec1724f8ed4071,
title = "Using vision-based control techniques for grasping objects",
abstract = "We present additions to the Controlled Active Vision framework that focus upon the autonomous grasping of a moving object in the manipulator's workspace. Our work extends the capabilities of an eye-in-hand robotic system beyond those as a 'pointer' or a 'camera orienter' to provide the flexibility required to robustly interact with the environment in the presence of uncertainty. The proposed work is experimentally verified using the Minnesota Robotic Visual Tracker (MRVT) to automatically select object features, to derive estimates of unknown environmental parameters, and to supply a control vector based upon these estimates to guide the manipulator in the grasping of a moving object. The system grasps objects in the manipulator's workspace without requiring the object to follow a specific trajectory and without requiring the object to maintain a specific orientation.",
author = "Smith, {Christopher E.} and Papanikolopoulos, {Nikolaos P}",
year = "1995",
month = "12",
day = "1",
language = "English (US)",
volume = "5",
pages = "4434--4439",
journal = "Proceedings of the IEEE International Conference on Systems, Man and Cybernetics",
issn = "0884-3627",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
