Abstract
Activity detection (e.g., recognizing people's behavior and intent), when used over an extended range of applications, suffers from high false-detection rates. Moreover, activity detection limited to the 2D image domain (symbolic space) is confined to qualitative activities: symbolic features, represented by apparent dimensions (i.e., pixels), vary with distance and viewing angle. One way to enhance performance is to work within the physical space, where object features are represented by their physical dimensions (e.g., inches or centimeters) and are therefore invariant to distance and viewing angle. In this paper, we propose an approach that constructs a 3D site model and co-registers the video with the site model to obtain a real-time physical reference at every pixel in the video. We present a unique approach that creates the 3D site model via fusion of a laser range sensor and a single camera. We present experimental results to demonstrate our approach.
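Co-registering the video with a 3D site model means that each pixel can be tied to a 3D point in the model and hence to physical units. The paper does not publish code; the sketch below is only a hedged illustration, in Python with NumPy, of the underlying geometry: a pixel ray is back-projected through assumed camera intrinsics and pose and intersected with a single planar facet standing in for the site model. All names and numeric values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed camera intrinsics (focal lengths and principal point, in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Assumed camera pose in the site-model (world) frame: rotation R and camera center C.
R = np.eye(3)                       # camera optical axis aligned with world +Z
C = np.array([0.0, 0.0, -10.0])     # camera 10 m from the plane Z = 0

def pixel_to_world_on_plane(u, v, plane_n, plane_d):
    """Back-project pixel (u, v) onto the plane n·X = d in world coordinates.

    Returns the 3D intersection point (the physical reference for that pixel),
    or None if the viewing ray is parallel to the plane or hits it behind
    the camera.
    """
    # Ray direction in the camera frame, then rotated into the world frame.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam
    denom = plane_n @ ray_world
    if abs(denom) < 1e-9:
        return None
    s = (plane_d - plane_n @ C) / denom
    if s <= 0:
        return None
    return C + s * ray_world

# Example: query a pixel against the ground plane Z = 0 of the hypothetical model.
point = pixel_to_world_on_plane(400, 300,
                                plane_n=np.array([0.0, 0.0, 1.0]),
                                plane_d=0.0)
print("world point (metres):", point)
```

In the paper's setting, the site model fused from laser range data and imagery would stand in for the single plane used here, so the intersection becomes a ray-model query, but the principle of recovering physical dimensions at every pixel is the same.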
| Original language | English (US) |
| --- | --- |
| Title of host publication | Proceedings - 34th Applied Image Pattern Recognition Workshop, AIPR 2005 |
| Subtitle of host publication | Multi-modal Imaging |
| Editors | Robert J. Bonneau |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 224-229 |
| Number of pages | 6 |
| ISBN (Electronic) | 0769524796, 9780769524795 |
| ISBN (Print) | 0769524796, 9780769524795 |
| DOIs | |
| State | Published - Dec 1 2005 |
| Event | 34th Applied Image Pattern Recognition Workshop, AIPR 2005 - Washington, United States. Duration: Oct 19 2005 → Oct 21 2005 |
Publication series
| Name | Proceedings - Applied Imagery Pattern Recognition Workshop |
| --- | --- |
| Volume | 2005 |
| ISSN (Print) | 1550-5219 |
Conference
| Conference | 34th Applied Image Pattern Recognition Workshop, AIPR 2005 |
| --- | --- |
| Country/Territory | United States |
| City | Washington |
| Period | 10/19/05 → 10/21/05 |
Bibliographical note
Publisher Copyright: © 2005 IEEE.