Abstract
Memory-based Temporal Graph Neural Networks are powerful tools in dynamic graph representation learning and have demonstrated superior performance in many real-world applications. However, their node memory favors smaller batch sizes to capture more dependencies in graph events and needs to be maintained synchronously across all trainers. As a result, existing frameworks suffer from accuracy loss when scaling to multiple GPUs. Even worse, the tremendous overhead of synchronizing the node memory makes it impractical to deploy the solution in GPU clusters. In this work, we propose DistTGL - an efficient and scalable solution to train memory-based TGNNs on distributed GPU clusters. DistTGL has three improvements over existing solutions: an enhanced TGNN model, a novel training algorithm, and an optimized system. In experiments, DistTGL achieves near-linear convergence speedup, outperforming the state-of-the-art single-machine method by 14.5% in accuracy and 10.17x in training throughput.
Original language | English (US) |
---|---|
Title of host publication | SC 2023 - International Conference for High Performance Computing, Networking, Storage and Analysis |
Publisher | IEEE Computer Society |
ISBN (Electronic) | 9798400701092 |
DOIs | |
State | Published - 2023 |
Externally published | Yes |
Event | 2023 International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2023 - Denver, United States. Duration: Nov 12 2023 → Nov 17 2023 |
Publication series
Name | International Conference for High Performance Computing, Networking, Storage and Analysis, SC |
---|---|
ISSN (Print) | 2167-4329 |
ISSN (Electronic) | 2167-4337 |
Conference
Conference | 2023 International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2023 |
---|---|
Country/Territory | United States |
City | Denver |
Period | 11/12/23 → 11/17/23 |
Bibliographical note
Publisher Copyright: © 2023 ACM.
Keywords
- Distributed algorithms
- Neural networks