TY - JOUR
T1 - Learning Object Grasping for Soft Robot Hands
AU - Choi, Changhyun
AU - Schwarting, Wilko
AU - DelPreto, Joseph
AU - Rus, Daniela
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7
Y1 - 2018/7
AB - We present a three-dimensional deep convolutional neural network (3D CNN) approach for grasping unknown objects with soft hands. Soft hands are compliant and capable of handling uncertainty in sensing and actuation, but come at the cost of unpredictable deformation of the soft fingers. Traditional model-driven grasping approaches, which assume known models for objects, robot hands, and stable grasps with expected contacts, are inapplicable to such soft hands, since predicting contact points between objects and soft hands is not straightforward. Our solution adopts a deep CNN approach to find good caging grasps for previously unseen objects by learning effective features and a classifier from point cloud data. Unlike recent CNN models applied to robotic grasping, which have been trained on 2D or 2.5D images and are limited to a fixed top grasping direction, we exploit the power of a 3D CNN model to estimate suitable grasp poses from multiple grasping directions (top and side directions) and wrist orientations, which has great potential for geometry-related robotic tasks. Our soft hands, guided by the 3D CNN algorithm, achieve an 87% grasp success rate on previously unseen objects. A set of comparative evaluations shows the robustness of our approach with respect to noise and occlusions.
KW - Perception for grasping and manipulation
KW - deep learning in robotics and automation
UR - http://www.scopus.com/inward/record.url?scp=85058371147&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85058371147&partnerID=8YFLogxK
U2 - 10.1109/LRA.2018.2810544
DO - 10.1109/LRA.2018.2810544
M3 - Article
AN - SCOPUS:85058371147
SN - 2377-3766
VL - 3
SP - 2370
EP - 2377
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 3
ER -