3D scene modeling using sensor fusion with laser range finder and image sensor

Yunqian Ma, Zheng Wang, Mike Bazakos, Wing Au

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

Activity detection (e.g. recognizing people's behavior and intent), when used over an extended range of applications, suffers from high false detection rates. Also, activity detection limited to the 2D image domain (symbolic space) is confined to qualitative activities. Symbolic features, represented by apparent dimensions, i.e. pixels, can vary with distance or viewing angle. One way to enhance performance is to work within the physical space, where object features are represented by their physical dimensions (e.g. inches or centimeters) and are invariant to distance or viewing angle. In this paper, we propose an approach to construct a 3D site model and co-register the video with the site model to obtain a real-time physical reference at every pixel in the video. We present a unique approach that creates a 3D site model via fusion of a laser range sensor and a single camera. We present experimental results to demonstrate our approach.
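The abstract does not include implementation details. The following Python sketch illustrates one common way to realize the co-registration step it describes: projecting laser range measurements into a single camera image through intrinsic and extrinsic calibration, so that covered pixels obtain a metric range value. This is not the authors' implementation; the calibration values and the function name register_range_points are illustrative assumptions.

# Minimal sketch (assumed, not from the paper): associate laser range points
# with camera pixels so each covered pixel carries a physical (metric) reference.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # assumed laser-to-camera rotation
t = np.array([0.10, 0.0, 0.0])         # assumed laser-to-camera translation (meters)

def register_range_points(points_laser, image_shape):
    """Project laser points (N x 3, meters) into the image and build a
    sparse per-pixel range map (meters) in the camera frame."""
    h, w = image_shape
    range_map = np.full((h, w), np.nan)
    pts_cam = points_laser @ R.T + t           # transform into camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]       # keep points in front of the camera
    uv = (K @ pts_cam.T).T                     # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    range_map[v[valid], u[valid]] = np.linalg.norm(pts_cam[valid], axis=1)
    return range_map

# Usage with a synthetic scan: with a calibrated sensor pair, the range map
# gives physical scale at covered pixels, which can then be interpolated over
# the site-model surfaces.
points = np.random.uniform([-2, -1, 2], [2, 1, 10], size=(1000, 3))
depth = register_range_points(points, image_shape=(480, 640))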

Original language: English (US)
Title of host publication: Proceedings - 34th Applied Imagery and Pattern Recognition Workshop
Subtitle of host publication: Multi-modal Imaging
Pages: 224-229
Number of pages: 6
DOIs
State: Published - Dec 1 2005
Event: 34th Applied Imagery and Pattern Recognition Workshop: Multi-modal Imaging - Washington, DC, United States
Duration: Oct 19 2005 – Oct 21 2005

Publication series

Name: Proceedings - Applied Imagery Pattern Recognition Workshop
Volume: 2005
ISSN (Print): 1550-5219

Other

Other: 34th Applied Imagery and Pattern Recognition Workshop: Multi-modal Imaging
Country: United States
City: Washington, DC
Period: 10/19/05 – 10/21/05
