Abstract
Feature tracking in video is a crucial task in computer vision. Usually, the tracking problem is handled one feature at a time, using a single-feature tracker like the Kanade-Lucas-Tomasi algorithm or one of its derivatives. While this approach works quite well when dealing with high-quality video and 'strong' features, it often falters when faced with dark and noisy video containing low-quality features. We present a framework for jointly tracking a set of features, which enables sharing information between the different features in the scene. We show that our method can be employed to track features for both rigid and non-rigid motions (possibly of a few moving bodies) even when some features are occluded. Furthermore, it can be used to significantly improve tracking results in poorly-lit scenes (where there is a mix of good and bad features). Our approach does not require direct modeling of the structure or the motion of the scene, and runs in real time on a single CPU core.
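The keywords below point to a low-rank trajectory prior as the mechanism for sharing information between features. A minimal sketch of that idea, not the paper's actual algorithm: stack all feature trajectories into one matrix and alternate between staying close to the raw per-feature measurements and shrinking the matrix toward low rank via singular-value soft-thresholding. All function names and parameters here (`soft_threshold_rank`, `joint_refine`, `tau`) are illustrative assumptions.

```python
import numpy as np

def soft_threshold_rank(W, tau):
    """Singular-value soft-thresholding: the proximal operator of the
    nuclear norm, which pulls W toward a low-rank matrix."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def joint_refine(tracks, tau=0.5, iters=20):
    """Illustrative joint refinement (not the paper's method): tracks is a
    2F x P matrix holding x/y coordinates over F frames for P features.
    Alternate averaging with the raw measurements and a low-rank shrinkage,
    so reliable features constrain the unreliable ones."""
    W = tracks.copy()
    for _ in range(iters):
        W = 0.5 * (W + tracks)           # stay close to raw measurements
        W = soft_threshold_rank(W, tau)  # enforce the low-rank trajectory prior
    return W

# Toy example: rank-2 ground-truth trajectories corrupted by noise.
rng = np.random.default_rng(0)
F, P = 20, 15
clean = rng.standard_normal((2 * F, 2)) @ rng.standard_normal((2, P))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
refined = joint_refine(noisy)
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(refined - clean)
```

The averaging step plays the role of the per-feature data term (which in the actual tracker would come from image measurements), while the shrinkage step couples the features; rank stays low because trajectories of a few rigid or near-rigid bodies span a low-dimensional subspace.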
Original language | English (US)
---|---
Title of host publication | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher | IEEE Computer Society
Pages | 3454-3461
Number of pages | 8
ISBN (Electronic) | 9781479951178
State | Published - Sep 24 2014
Event | 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014 - Columbus, United States. Duration: Jun 23 2014 → Jun 28 2014
Publication series
Name | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
---|---
ISSN (Print) | 1063-6919
Other
Other | 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014
---|---
Country/Territory | United States
City | Columbus
Period | 6/23/14 → 6/28/14
Bibliographical note
Publisher Copyright: © 2014 IEEE.
Keywords
- Feature Tracking
- Low-Rank Regularization
- Optical Flow
- Nonconvex Optimization