TY - GEN
T1 - Fast parallel cosine K-nearest neighbor graph construction
AU - Anastasiu, David C.
AU - Karypis, George
PY - 2017/1/25
Y1 - 2017/1/25
AB - The k-nearest neighbor graph is an important structure in many data mining methods for clustering, advertising, recommender systems, and outlier detection. Constructing the graph requires computing up to n² similarities for a set of n objects. This has led researchers to seek approximate methods, which find many but not all of the nearest neighbors. In contrast, we leverage shared-memory parallelism and recent advances in similarity joins to solve the problem exactly, via a filtering-based approach. Our method considers all pairs of potential neighbors but quickly filters those that could not be part of the k-nearest neighbor graph, based on similarity upper-bound estimates. We evaluated our solution on several real-world datasets and found that, using 16 threads, our method achieves up to 12.9x speedup over our exact baseline and is sometimes even faster than approximate methods. Moreover, an approximate version of our method is up to 21.7x more efficient than the best approximate state-of-the-art baseline at similar high recall.
UR - http://www.scopus.com/inward/record.url?scp=85015180534&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85015180534&partnerID=8YFLogxK
U2 - 10.1109/IA3.2016.13
DO - 10.1109/IA3.2016.13
M3 - Conference contribution
AN - SCOPUS:85015180534
T3 - Proceedings of IA3 2016 - 6th Workshop on Irregular Applications: Architectures and Algorithms, held in conjunction with SC 2016: The International Conference for High Performance Computing, Networking, Storage and Analysis
SP - 50
EP - 53
BT - Proceedings of IA3 2016 - 6th Workshop on Irregular Applications
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 6th Workshop on Irregular Applications: Architectures and Algorithms, IA3 2016
Y2 - 13 November 2016
ER -