TY - JOUR
T1 - Ensuring safety in human-robot dialog - A cost-directed approach
AU - Sattar, Junaed
AU - Little, James J.
PY - 2014/9/22
Y1 - 2014/9/22
N2 - We present an approach for detecting potentially unsafe commands in human-robot dialog, in which a robotic system evaluates the task cost of input commands and asks input-specific, directed questions to ensure safe task execution. The goal is to reduce risk, both to the robot and to the environment, by asking context-appropriate questions. Given an input program (i.e., a sequence of commands), the system evaluates a set of likely alternate programs along with their likelihood and cost; these are given as input to a Decision Function that decides whether to execute the task or to confirm the plan with the human partner. A process called token-risk grounding identifies the costly commands in the programs and specifically asks the human user to clarify those commands. We evaluate our system in two simulated robot tasks and on board the Willow Garage PR2 and TurtleBot robots in an indoor task setting. In both sets of evaluations, the results show that the system is able to identify the specific commands that contribute to high task cost and to present users with the option to either confirm or modify those commands. In addition to ensuring task safety, this results in an overall reduction in robot reprogramming time.
AB - We present an approach for detecting potentially unsafe commands in human-robot dialog, in which a robotic system evaluates the task cost of input commands and asks input-specific, directed questions to ensure safe task execution. The goal is to reduce risk, both to the robot and to the environment, by asking context-appropriate questions. Given an input program (i.e., a sequence of commands), the system evaluates a set of likely alternate programs along with their likelihood and cost; these are given as input to a Decision Function that decides whether to execute the task or to confirm the plan with the human partner. A process called token-risk grounding identifies the costly commands in the programs and specifically asks the human user to clarify those commands. We evaluate our system in two simulated robot tasks and on board the Willow Garage PR2 and TurtleBot robots in an indoor task setting. In both sets of evaluations, the results show that the system is able to identify the specific commands that contribute to high task cost and to present users with the option to either confirm or modify those commands. In addition to ensuring task safety, this results in an overall reduction in robot reprogramming time.
UR - http://www.scopus.com/inward/record.url?scp=84911495231&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84911495231&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2014.6907842
DO - 10.1109/ICRA.2014.6907842
M3 - Conference article
AN - SCOPUS:84911495231
SN - 1050-4729
SP - 6660
EP - 6666
JO - Proceedings - IEEE International Conference on Robotics and Automation
JF - Proceedings - IEEE International Conference on Robotics and Automation
M1 - 6907842
T2 - 2014 IEEE International Conference on Robotics and Automation, ICRA 2014
Y2 - 31 May 2014 through 7 June 2014
ER -