Large-scale 3D image localization requires computationally expensive matching between 2D feature points in the query image and a 3D point cloud. In this paper, we present a method that accelerates the matching process and reduces the memory footprint by analyzing the view statistics of points in a training corpus. Given a training image set that is representative of common views of a scene, our approach identifies a compact subset of the 3D point cloud for efficient localization, while achieving localization performance comparable to using the full 3D point cloud. We show that the problem can be precisely formulated as a mixed-integer quadratic program and present a pointwise descriptor calibration process to improve matching. Our algorithm outperforms the state-of-the-art greedy algorithm on standard datasets, on measures of both point-cloud compression and localization accuracy.
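To make the point-selection setting concrete, here is a minimal sketch of the kind of greedy baseline the paper compares against: given a binary visibility matrix recording which 3D points are observed in which training images, repeatedly pick the point that covers the most still-undercovered images until every image retains at least k selected points. The function name, the coverage criterion, and the matrix layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def greedy_point_selection(visibility, k=3):
    """Greedy baseline for point-cloud compression (illustrative sketch).

    visibility: (num_images, num_points) boolean matrix; entry (i, j) is
    True if 3D point j is observed in training image i.
    k: required number of selected points visible per image.

    Returns indices of a compact point subset such that every training
    image sees at least k selected points (when feasible).
    """
    num_images, num_points = visibility.shape
    demand = np.full(num_images, k)      # remaining coverage per image
    selected = []
    remaining = set(range(num_points))
    while demand.max() > 0 and remaining:
        # Gain of a point = number of still-undercovered images it is seen in.
        gains = {j: int(visibility[demand > 0, j].sum()) for j in remaining}
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break                        # no remaining point helps any image
        selected.append(best)
        demand = np.maximum(demand - visibility[:, best].astype(int), 0)
        remaining.discard(best)
    return selected
```

The paper's contribution replaces this greedy covering heuristic with an exact mixed-integer quadratic program over the same visibility statistics.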