First-person action-object detection with EgoNet

Gedas Bertasius, Hyun Soo Park, Stella X. Yu, Jianbo Shi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Unlike traditional third-person cameras mounted on robots, a first-person camera captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose the concept of action-objects: the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent, but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict the per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn the spatial distribution of action-objects in first-person data. We demonstrate EgoNet's predictive power by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet exhibits strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions.
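
The abstract describes EgoNet only at a high level. As a rough illustration of the two-stream idea, the sketch below (in PyTorch, which the paper does not specify) fuses an RGB appearance stream with a depth-and-height layout stream and a normalized coordinate grid standing in for the first-person coordinate embedding, and outputs a per-pixel action-object likelihood map. All layer sizes, the fusion scheme, and the coordinate-grid stand-in are assumptions, not the authors' actual architecture.

```python
# Hedged sketch of a two-stream per-pixel predictor in the spirit of EgoNet.
# Channel counts, depths, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, preserving spatial resolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TwoStreamActionObjectNet(nn.Module):
    """RGB stream + 3D spatial-layout (depth/height) stream, fused with a
    normalized image-coordinate grid (a simple stand-in for the paper's
    first-person coordinate embedding), producing a per-pixel
    action-object likelihood map."""

    def __init__(self):
        super().__init__()
        self.rgb_stream = conv_block(3, 32)   # appearance cues
        self.dh_stream = conv_block(2, 32)    # depth + height cues
        # 32 + 32 stream features plus 2 coordinate channels.
        self.fuse = nn.Sequential(
            conv_block(32 + 32 + 2, 64),
            nn.Conv2d(64, 1, kernel_size=1),  # per-pixel logit
        )

    def forward(self, rgb, depth_height):
        b, _, h, w = rgb.shape
        ys = torch.linspace(-1, 1, h, device=rgb.device)
        xs = torch.linspace(-1, 1, w, device=rgb.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([gx, gy]).expand(b, -1, -1, -1)
        feats = torch.cat(
            [self.rgb_stream(rgb), self.dh_stream(depth_height), coords],
            dim=1,
        )
        return torch.sigmoid(self.fuse(feats))  # action-object likelihood


if __name__ == "__main__":
    net = TwoStreamActionObjectNet()
    rgb = torch.randn(1, 3, 120, 160)
    dh = torch.randn(1, 2, 120, 160)  # depth and height maps
    print(net(rgb, dh).shape)         # torch.Size([1, 1, 120, 160])
```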

Original language: English (US)
Title of host publication: Robotics
Subtitle of host publication: Science and Systems XIII, RSS 2017
Editors: Siddhartha Srinivasa, Nora Ayanian, Nancy Amato, Scott Kuindersma
Publisher: MIT Press Journals
ISBN (Electronic): 9780992374730
State: Published - Jan 1 2017
Event: 2017 Robotics: Science and Systems, RSS 2017 - Cambridge, United States
Duration: Jul 12 2017 - Jul 16 2017

Publication series

Name: Robotics: Science and Systems
Volume: 13
ISSN (Electronic): 2330-765X

Other

Other: 2017 Robotics: Science and Systems, RSS 2017
Country: United States
City: Cambridge
Period: 7/12/17 - 7/16/17

Fingerprint

Cameras
Robots
Spatial distribution
Pixels
Object detection

Cite this

Bertasius, G., Park, H. S., Yu, S. X., & Shi, J. (2017). First-person action-object detection with EgoNet. In S. Srinivasa, N. Ayanian, N. Amato, & S. Kuindersma (Eds.), Robotics: Science and Systems XIII, RSS 2017 (Robotics: Science and Systems; Vol. 13). MIT Press Journals.
