Abstract
Our goal is to enable robots to understand, or “ground,” natural language instructions in the context of their perceived workspace. Contemporary models learn a probabilistic correspondence between input phrases and semantic concepts (or groundings) such as objects, regions, or goals for robot motion derived from the robot’s world model. Crucially, these models assume a fixed, a priori known set of object types and phrases, and train probable correspondences offline using static language-workspace corpora. Hence, model inference fails when an input command contains unknown phrases or references to novel object types that were not seen during training. We introduce a probabilistic model that incorporates a notion of unknown groundings and learns a correspondence between an unknown phrase and an unknown object that cannot be classified into known visual categories. Further, we extend the model to “hypothesize” known or unknown object groundings when the language utterance references an object that exists beyond the robot’s partial view of its workspace. When the grounding for an instruction is unknown or hypothetical, the robot performs exploratory actions to gather new observations and find the referenced objects beyond the current view. Once an unknown grounding is associated with percepts of a new object, the model is adapted and trained online using accrued visual-linguistic observations, reflecting the new knowledge when interpreting future utterances. We evaluate the model quantitatively using a corpus from a user study and report experiments on a mobile platform in a workspace populated with objects from a standardized dataset. A video of the experimental demonstration is available at: https://youtu.be/XFLNdaUKgW0.
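To make the core idea concrete, the following is a minimal illustrative sketch (not the authors' model or implementation): a grounding distribution over known candidate objects that also includes an explicit "unknown" grounding, together with a simple online update once an unknown phrase has been associated with new percepts. All names, scores, and the scoring scheme here are assumptions for illustration only.

```python
# Hypothetical sketch: grounding distribution with an explicit unknown symbol.
# None of these names or constants come from the paper.
import math
from collections import defaultdict

UNKNOWN = "<unknown>"   # explicit grounding for unrecognized references
UNKNOWN_SCORE = 0.5     # assumed prior score for the unknown grounding

# Co-occurrence scores accrued from visual-linguistic observations.
phrase_object_scores = defaultdict(lambda: defaultdict(float))

def grounding_distribution(phrase, candidate_objects):
    """Normalize scores over known candidates plus the unknown grounding."""
    scores = {obj: math.exp(phrase_object_scores[phrase][obj])
              for obj in candidate_objects}
    scores[UNKNOWN] = math.exp(UNKNOWN_SCORE)  # unseen phrases fall back here
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

def adapt_online(phrase, observed_object, weight=1.0):
    """After exploration pairs a phrase with new percepts, update the scores."""
    phrase_object_scores[phrase][observed_object] += weight

# Usage: an unseen phrase initially grounds most strongly to UNKNOWN;
# after one paired observation, the newly observed object dominates.
print(grounding_distribution("the crate", ["ball", "mug"]))
adapt_online("the crate", "crate")
print(grounding_distribution("the crate", ["ball", "mug", "crate"]))
```

In this toy version, exploration is abstracted away: the caller is assumed to have already gathered the observation that pairs the phrase with a new object before invoking the online update.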
| Original language | English (US) |
| --- | --- |
| Title of host publication | Springer Proceedings in Advanced Robotics |
| Publisher | Springer Science and Business Media B.V. |
| Pages | 317-333 |
| Number of pages | 17 |
| DOIs | |
| State | Published - 2020 |
Publication series
| Name | Springer Proceedings in Advanced Robotics |
| --- | --- |
| Volume | 10 |
| ISSN (Print) | 2511-1256 |
| ISSN (Electronic) | 2511-1264 |
Bibliographical note
Funding Information: The authors acknowledge support in part from the U.S. Army Research Laboratory under the RCTA program and from the National Science Foundation. G. J. Stein acknowledges support from an NDSEG Graduate Fellowship.
Publisher Copyright:
© 2020, Springer Nature Switzerland AG.