TY - GEN
T1 - First-person action-object detection with EgoNet
AU - Bertasius, Gedas
AU - Park, Hyun Soo
AU - Yu, Stella X.
AU - Shi, Jianbo
N1 - Publisher Copyright:
© 2017 MIT Press Journals. All rights reserved.
Copyright:
Copyright 2020 Elsevier B.V., All rights reserved.
PY - 2017
Y1 - 2017
N2 - Unlike traditional third-person cameras mounted on robots, a first-person camera captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose the concept of action-objects: the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent, but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict the per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn the spatial distribution of action-objects in first-person data. We demonstrate EgoNet's predictive power by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions.
AB - Unlike traditional third-person cameras mounted on robots, a first-person camera captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose the concept of action-objects: the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent, but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict the per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn the spatial distribution of action-objects in first-person data. We demonstrate EgoNet's predictive power by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions.
UR - http://www.scopus.com/inward/record.url?scp=85041923168&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041923168&partnerID=8YFLogxK
U2 - 10.15607/rss.2017.xiii.012
DO - 10.15607/rss.2017.xiii.012
M3 - Conference contribution
AN - SCOPUS:85041923168
T3 - Robotics: Science and Systems
BT - Robotics: Science and Systems
A2 - Amato, Nancy
A2 - Srinivasa, Siddhartha
A2 - Ayanian, Nora
A2 - Kuindersma, Scott
PB - MIT Press Journals
T2 - 2017 Robotics: Science and Systems, RSS 2017
Y2 - 12 July 2017 through 16 July 2017
ER -
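The abstract describes a joint two-stream network that fuses RGB appearance with depth/height layout cues and a first-person coordinate embedding to predict a per-pixel action-object likelihood. Below is a minimal sketch of that general idea, assuming a PyTorch implementation; the class name, layer widths, concatenation-based fusion, and the plain (x, y) coordinate grid standing in for the first-person coordinate embedding are illustrative assumptions, not the authors' EgoNet architecture.

```python
# Hypothetical sketch of a two-stream per-pixel action-object predictor.
# Layer sizes, the fusion scheme, and the coordinate grid are illustrative
# assumptions; this is not the published EgoNet implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Small convolutional encoder block: conv -> ReLU -> conv -> ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TwoStreamActionObjectNet(nn.Module):
    def __init__(self, feat_ch=32):
        super().__init__()
        self.rgb_stream = conv_block(3, feat_ch)   # visual appearance (RGB) cues
        self.dh_stream = conv_block(2, feat_ch)    # 3D layout cues: depth + height
        # Fuse both streams plus a 2-channel (x, y) grid that stands in for
        # a first-person spatial prior over where action-objects appear.
        self.head = nn.Conv2d(2 * feat_ch + 2, 1, kernel_size=1)

    def forward(self, rgb, dh):
        n, _, h, w = rgb.shape
        ys = torch.linspace(-1, 1, h, device=rgb.device)
        xs = torch.linspace(-1, 1, w, device=rgb.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([gx, gy]).expand(n, -1, -1, -1)
        fused = torch.cat([self.rgb_stream(rgb), self.dh_stream(dh), coords], dim=1)
        return torch.sigmoid(self.head(fused))  # per-pixel likelihood in [0, 1]


if __name__ == "__main__":
    net = TwoStreamActionObjectNet()
    rgb = torch.randn(1, 3, 120, 160)   # RGB frame
    dh = torch.randn(1, 2, 120, 160)    # depth + height channels
    print(net(rgb, dh).shape)           # torch.Size([1, 1, 120, 160])
```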