Abstract
Large-scale 3D image localization requires computationally expensive matching between 2D feature points in the query image and a 3D point cloud. In this chapter, we present a method that accelerates the matching process and reduces the memory footprint by analyzing the view statistics of points in a training corpus. Given a training image set representative of common views of a scene, our approach identifies a compact subset of the 3D point cloud for efficient localization, while achieving localization performance comparable to that of the full 3D point cloud. We demonstrate that the selection problem can be formulated precisely as a mixed-integer quadratic program, and we present a point-wise descriptor calibration process to improve matching. On standard datasets, our algorithm outperforms the state-of-the-art greedy algorithm on measures of both point-cloud compression and localization accuracy.
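To make the point-selection idea concrete, the sketch below poses a toy coverage-style mixed-integer quadratic program in Python with CVXPY. The visibility matrix `V`, the per-image coverage threshold `k_min`, and the redundancy penalty `Q` are illustrative assumptions rather than the chapter's actual formulation, and solving requires an installed MIQP-capable backend such as SCIP or GUROBI.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Toy problem sizes (hypothetical): 40 reconstructed 3D points, 8 training images.
n_points, n_images = 40, 8

# Visibility matrix: V[i, j] = 1 if training image i observes 3D point j.
V = (rng.random((n_images, n_points)) < 0.3).astype(float)

# Require every training image to keep at least k_min visible points,
# so a query taken from a similar viewpoint still has enough candidate matches.
k_min = 5

# Binary selection variable: x[j] = 1 means 3D point j is kept in the compressed cloud.
x = cp.Variable(n_points, boolean=True)

# Toy quadratic redundancy penalty: points that are co-visible in many training
# images are partly redundant. Q = V^T V is PSD; a small ridge keeps the
# convexity check numerically safe.
Q = V.T @ V + 1e-3 * np.eye(n_points)
lam = 0.01

objective = cp.Minimize(cp.sum(x) + lam * cp.quad_form(x, Q))
constraints = [V @ x >= k_min]

prob = cp.Problem(objective, constraints)
prob.solve()  # requires a MIQP-capable solver, e.g. SCIP or GUROBI

keep = np.flatnonzero(x.value > 0.5)
print(f"kept {keep.size} of {n_points} points; "
      f"min per-image coverage = {int(V[:, keep].sum(axis=1).min())}")
```

The objective and constraints used in the chapter are, of course, derived from the actual view statistics and descriptor quality; the sketch only illustrates that subset selection under per-view coverage constraints maps naturally onto a small MIQP.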
Original language | English (US) |
---|---|
Title of host publication | Advances in Computer Vision and Pattern Recognition |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 189-202 |
Number of pages | 14 |
State | Published - 2016 |
Externally published | Yes |
Publication series
Name | Advances in Computer Vision and Pattern Recognition |
---|---|
ISSN (Print) | 2191-6586 |
ISSN (Electronic) | 2191-6594 |
Bibliographical note
Publisher Copyright: © Springer International Publishing Switzerland 2016.