Abstract
This paper describes an implemented robotic agent architecture in which the environment, as sensed by the agent, is used to guide the recognition of spoken and gestural directives given by a human user. The agent recognizes these directives using a probabilistic language model that conditions probability estimates for possible directives on visually-, proprioceptively-, or otherwise-sensed properties of entities in its environment, and updates these probabilities when those properties change. The result is an agent that can reject mis-recognized directives that do not 'make sense' given its representation of the current state of the world.
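The core idea — weighting the recognizer's hypotheses by how plausible each directive is in the sensed world — can be sketched as a simple re-ranking step. This is an illustrative sketch only, not the paper's implementation; the directive representation, the `plausibility` function, and the `world_state` dictionary are all assumed names for exposition.

```python
def plausibility(directive, world_state):
    """Environment-conditioned prior for a directive, as a sketch:
    zero if the directive's target is absent from the sensed world
    or violates a sensed property, else a uniform weight."""
    obj, action = directive
    if obj not in world_state:
        return 0.0
    # e.g. "pick up" only makes sense for objects sensed as graspable
    if action == "pick up" and not world_state[obj].get("graspable", False):
        return 0.0
    return 1.0

def rerank(hypotheses, world_state):
    """Combine each hypothesis's recognizer score with the
    environment prior, renormalize, and sort best-first."""
    scored = [(h, s * plausibility(h, world_state)) for h, s in hypotheses]
    total = sum(s for _, s in scored) or 1.0
    return sorted(((h, s / total) for h, s in scored), key=lambda x: -x[1])

# Usage: the recognizer prefers a mis-hearing ("fox"), but the sensed
# world contains only a graspable "box", so re-ranking rejects it.
world = {"box": {"graspable": True}}
hyps = [(("fox", "pick up"), 0.6), (("box", "pick up"), 0.4)]
best = rerank(hyps, world)[0][0]
```

When a sensed property changes (the box is no longer graspable, say), re-running `rerank` against the updated `world_state` yields the updated probabilities, mirroring the architecture's dynamic updating of directive probabilities.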
| Original language | English (US) |
| --- | --- |
| Pages | 193-199 |
| Number of pages | 7 |
| State | Published - 2005 |
| Event | 4th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 05 - Utrecht, Netherlands |
| Duration | Jul 25, 2005 → Jul 29, 2005 |
Keywords
- Language modeling
- Multimodal interfaces
- Robotics
- Sensor fusion
- Spoken language interfaces