TY - GEN
T1 - 3D scene modeling for activity detection
AU - Ma, Yunqian
AU - Bazakos, Mike
AU - Wang, Zheng
AU - Au, Wing
PY - 2005
Y1 - 2005
N2 - Current computer vision algorithms can process video sequences and perform key low-level functions, such as motion detection, motion tracking, and object classification. This motivates activity detection (e.g., recognizing people's behavior and intent), which is becoming increasingly important. However, these algorithms all have severe performance limitations when used over an extended range of applications: they suffer from high false detection rates, high missed detection rates, loss of track due to partial occlusions, and similar failures. Moreover, activity detection is limited to the 2D image domain and is confined to qualitative activities (such as a car entering a region of interest). Adding 3D information increases the performance of both the computer vision algorithms and the activity detection system. In this paper, we propose a unique approach that creates a 3D site model via sensor fusion of a laser range finder and a single camera, which can then convert the symbolic (pixel-based) features of each object to physical features (e.g., in feet or yards). We present experimental results to demonstrate our 3D site model.
AB - Current computer vision algorithms can process video sequences and perform key low-level functions, such as motion detection, motion tracking, and object classification. This motivates activity detection (e.g., recognizing people's behavior and intent), which is becoming increasingly important. However, these algorithms all have severe performance limitations when used over an extended range of applications: they suffer from high false detection rates, high missed detection rates, loss of track due to partial occlusions, and similar failures. Moreover, activity detection is limited to the 2D image domain and is confined to qualitative activities (such as a car entering a region of interest). Adding 3D information increases the performance of both the computer vision algorithms and the activity detection system. In this paper, we propose a unique approach that creates a 3D site model via sensor fusion of a laser range finder and a single camera, which can then convert the symbolic (pixel-based) features of each object to physical features (e.g., in feet or yards). We present experimental results to demonstrate our 3D site model.
UR - http://www.scopus.com/inward/record.url?scp=33646736005&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33646736005&partnerID=8YFLogxK
U2 - 10.1007/11568346_32
DO - 10.1007/11568346_32
M3 - Conference contribution
AN - SCOPUS:33646736005
SN - 3540293957
SN - 9783540293958
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 300
EP - 309
BT - Perspectives in Conceptual Modeling - ER 2005 Workshops CAOIS, BP-UML, CoMoGIS, eCOMO, and QoIS, Proceedings
T2 - ER 2005 Workshops CAOIS, BP-UML, CoMoGIS, eCOMO, and QoIS - Perspectives in Conceptual Modeling
Y2 - 24 October 2005 through 28 October 2005
ER -