RSC: Accelerate Graph Neural Networks Training via Randomized Sparse Computations

Zirui Liu, Shengyuan Chen, Kaixiong Zhou, Daochen Zha, Xiao Huang, Xia Hu

Research output: Contribution to journal › Conference article › peer-review


Abstract

Training graph neural networks (GNNs) is extremely time-consuming because sparse graph-based operations are hard to accelerate on commodity hardware. Prior art successfully reduces the computation cost of dense-matrix-based operations (e.g., convolution and linear layers) via sampling-based approximation. However, unlike dense matrices, sparse matrices are stored in an irregular data format in which each row/column may have a different number of non-zero entries. Thus, compared to their dense counterparts, approximating sparse operations poses two unique challenges: (1) we cannot directly control the efficiency of an approximated sparse operation, since the computation is only executed on non-zero entries; (2) sampling sparse matrices is far less efficient due to the irregular data format. To address these issues, our key idea is to control the accuracy-efficiency trade-off by optimizing the allocation of computation resources across layers and epochs. For the first challenge, we customize the amount of computation assigned to each sparse operation while keeping the total resource usage below a given budget. For the second challenge, we cache previously sampled sparse matrices to reduce the per-epoch sampling overhead. To this end, we propose Randomized Sparse Computation (RSC). In practice, RSC achieves up to 11.6× speedup for a single sparse operation and 1.6× end-to-end wall-clock speedup with almost no accuracy drop. Code is available at https://github.com/warai-0toko/RSC-ICML.
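The abstract describes the idea only at a high level. As a rough illustration (not the authors' implementation), the sketch below shows a sampling-based approximation of a sparse-dense matrix product of the kind that dominates GNN message passing. The function and parameter names (approx_spmm, sample_ratio) are hypothetical, and the column-norm sampling scheme is a standard approximate-matrix-multiplication heuristic assumed here for illustration.

```python
# Minimal sketch of sampling-based approximate sparse-dense matrix
# multiplication, the kind of operation RSC accelerates in GNN training.
# NOT the RSC implementation: names are illustrative and norm-proportional
# sampling is an assumed heuristic.
import numpy as np
import scipy.sparse as sp

def approx_spmm(A, H, sample_ratio=0.3, rng=None):
    """Estimate A @ H (A: sparse adjacency, H: dense node features) by
    sampling k = sample_ratio * n columns of A with replacement,
    proportionally to their norms, and rescaling the matching rows of H
    so the estimate is unbiased in expectation."""
    rng = rng or np.random.default_rng()
    n = A.shape[1]
    k = max(1, int(sample_ratio * n))
    # Sampling probabilities proportional to the column norms of A.
    col_norms = np.sqrt(np.asarray(A.multiply(A).sum(axis=0)).ravel())
    probs = col_norms / col_norms.sum()
    idx = rng.choice(n, size=k, replace=True, p=probs)
    # Rescale so that E[estimate] = A @ H.
    scale = 1.0 / (k * probs[idx])
    return A[:, idx] @ (H[idx] * scale[:, None])

# Toy usage: a random sparse "adjacency" and dense node features.
A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
H = np.random.default_rng(0).standard_normal((1000, 64))
H_approx = approx_spmm(A, H, sample_ratio=0.3)
```

The sketch only illustrates the single-operation approximation; per the abstract, RSC additionally chooses per-operation sampling budgets under a global resource constraint and caches sampled sparse matrices across epochs to amortize the sampling cost.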

Original language: English (US)
Pages (from-to): 21426-21449
Number of pages: 24
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Externally published: Yes
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: Jul 23, 2023 - Jul 29, 2023

Bibliographical note

Publisher Copyright:
© 2023 Proceedings of Machine Learning Research. All rights reserved.
