We present an approach for detecting potentially unsafe commands in human-robot dialog, in which a robotic system evaluates the task cost of input commands and asks input-specific, directed questions to ensure safe task execution. The goal is to reduce risk, both to the robot and to the environment, by asking context-appropriate questions. Given an input program (i.e., a sequence of commands), the system evaluates a set of likely alternate programs along with their likelihoods and costs; these are passed to a decision function that determines whether to execute the task or to confirm the plan with the human partner. A process called token-risk grounding identifies the costly commands in the programs and specifically asks the human user to clarify those commands. We evaluate our system on two simulated robot tasks, and also on-board the Willow Garage PR2 and TurtleBot robots in an indoor task setting. In both sets of evaluations, the results show that the system identifies the specific commands that contribute to high task cost and presents users the option to either confirm or modify those commands. In addition to ensuring task safety, this yields an overall reduction in robot reprogramming time.
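The execute-or-confirm decision described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual formulation: the `Candidate` structure, the expected-cost threshold, and the per-command cost table are all hypothetical assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    commands: list      # the program: a sequence of command tokens
    likelihood: float   # probability this is the intended program
    cost: float         # estimated cost (risk) of executing the program

# Hypothetical threshold; the actual decision function is defined in the paper.
COST_THRESHOLD = 0.5

def decide(candidates, per_command_cost):
    """Return ('execute', None) or ('confirm', risky_commands).

    candidates: likely alternate programs with likelihoods and costs.
    per_command_cost: assumed lookup of cost per command token,
    standing in for token-risk grounding.
    """
    # Expected task cost over the alternate programs.
    expected_cost = sum(c.likelihood * c.cost for c in candidates)
    if expected_cost <= COST_THRESHOLD:
        return "execute", None
    # Otherwise, single out the costly commands in the most likely
    # program and ask the user to confirm or modify only those.
    best = max(candidates, key=lambda c: c.likelihood)
    risky = [cmd for cmd in best.commands
             if per_command_cost.get(cmd, 0.0) > COST_THRESHOLD]
    return "confirm", risky
```

Asking about only the flagged commands, rather than re-eliciting the whole program, is what allows the reduction in reprogramming time reported in the evaluation.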
|Original language||English (US)|
|Number of pages||7|
|Journal||Proceedings - IEEE International Conference on Robotics and Automation|
|State||Published - Sep 22 2014|
|Event||2014 IEEE International Conference on Robotics and Automation, ICRA 2014 - Hong Kong, China|
|Event duration||May 31 2014 → Jun 7 2014|