TY - JOUR
T1 - SCV-GNN: Sparse Compressed Vector-Based Graph Neural Network Aggregation
T2 - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
AU - Unnikrishnan, Nanda K.
AU - Gould, Joe
AU - Parhi, Keshab K.
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2023/12/1
Y1 - 2023/12/1
AB - Graph neural networks (GNNs) have emerged as a powerful tool for processing graph-based data in fields such as communication networks, molecular interactions, chemistry, social networks, and neuroscience. GNNs are characterized by the ultrasparse nature of their adjacency matrices, which necessitates dedicated hardware beyond general-purpose sparse matrix multipliers. While there has been extensive research on designing dedicated hardware accelerators for GNNs, few studies have explored in depth the impact of the sparse storage format on the efficiency of GNN accelerators. This article proposes SCV-GNN with the novel sparse compressed vectors (SCVs) format optimized for the aggregation operation. We use Z-Morton ordering to derive a data-locality-based computation ordering and partitioning scheme. This article also presents how the proposed SCV-GNN scales on a vector processing system. Experimental results over various datasets show that the proposed method achieves geometric mean speedups of 7.96× and 7.04× over compressed sparse column (CSC) and compressed sparse row (CSR) aggregation, respectively. The proposed method also reduces memory traffic by factors of 3.29× and 4.37× over CSC and CSR, respectively. Thus, the proposed aggregation format reduces both latency and memory accesses for GNN inference.
KW - Accelerator architectures
KW - aggregation
KW - graph neural networks (GNNs)
KW - neural network inference
UR - http://www.scopus.com/inward/record.url?scp=85164446590&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164446590&partnerID=8YFLogxK
DO - 10.1109/TCAD.2023.3291672
M3 - Article
AN - SCOPUS:85164446590
SN - 0278-0070
VL - 42
SP - 4803
EP - 4816
JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
IS - 12
ER -