We present a novel method that enables robots to quickly learn to manipulate objects by leveraging a motion planner to generate 'expert' training trajectories from a small amount of human-labeled data. In contrast to the traditional sense-plan-act cycle, we propose a deep learning architecture and training regimen called PtPNet that can estimate effective end-effector trajectories for manipulation directly from a single RGB-D image of an object. Additionally, we present a data collection and augmentation pipeline that enables the automatic generation of large numbers (millions) of training image and trajectory examples with almost no human labeling effort. We demonstrate our approach on a non-prehensile tool-based manipulation task, specifically picking up shoes with a hook. In hardware experiments, PtPNet generates motion plans (open-loop trajectories) that reliably (89% success over 189 trials) pick up four very different shoes from a range of positions and orientations, and it reliably picks up a shoe it has never seen before. Compared with a traditional sense-plan-act paradigm, our system has the advantages of operating on sparse information (a single RGB-D frame), producing high-quality trajectories much faster than the expert planner (300 ms versus several seconds), and generalizing effectively to previously unseen shoes. Video available at https://youtu.be/voIkyiBtwn4.
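The abstract does not specify PtPNet's internals, so the following is only a minimal, hypothetical sketch of a network of this general shape: a CNN encodes one 4-channel RGB-D frame and a small head regresses a fixed number of end-effector waypoints. The class name `PtPNetSketch`, the waypoint count, the layer sizes, and the 6-DoF pose parameterization are all illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PtPNetSketch(nn.Module):
    """Illustrative sketch only: map a single RGB-D image (4 channels)
    to an open-loop trajectory of N end-effector waypoints. All sizes
    and the pose parameterization are assumptions, not the paper's."""

    def __init__(self, num_waypoints: int = 16):
        super().__init__()
        self.num_waypoints = num_waypoints
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 128, 1, 1)
        )
        # Each waypoint: xyz position + a 3-vector orientation
        # (e.g. axis-angle), i.e. 6 numbers per waypoint.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, num_waypoints * 6),
        )

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        # rgbd: (B, 4, H, W) -> trajectory: (B, num_waypoints, 6)
        feats = self.encoder(rgbd)
        return self.head(feats).view(-1, self.num_waypoints, 6)

# Example: one forward pass, then a plain L2 regression loss against an
# 'expert' trajectory of the kind a motion planner would supply.
net = PtPNetSketch()
rgbd = torch.randn(1, 4, 120, 160)        # dummy RGB-D frame
expert_traj = torch.randn(1, 16, 6)       # dummy planner trajectory
loss = ((net(rgbd) - expert_traj) ** 2).mean()
```

Because the network outputs the whole trajectory in one pass, inference cost is a single forward evaluation, which is consistent with the abstract's ~300 ms planning time versus several seconds for the expert planner.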
| Original language | English (US) |
| Title of host publication | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Number of pages | 8 |
| State | Published - Nov 2019 |
| Event | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019 - Macau, China; Nov 3 2019 → Nov 8 2019 |
| Publication series | IEEE International Conference on Intelligent Robots and Systems |
| Conference | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019 |
| Period | 11/3/19 → 11/8/19 |
| Bibliographical note | Publisher Copyright: © 2019 IEEE. |