Abstract
Modern dynamical systems are rapidly incorporating artificial intelligence to improve the efficiency and quality of complex predictive analytics. To operate efficiently on increasingly large datasets and intrinsically dynamic, non-Euclidean data structures, the computing community has turned to Graph Neural Networks (GNNs). We make a key observation that existing GNN processing frameworks do not efficiently handle the intrinsic dynamics of modern GNNs: they process the complete static graph at each time step, leading to repeated redundant computation and severe under-utilization of system resources. We propose a novel dynamic graph neural network (DGNN) processing framework that captures the dynamically evolving dataflow of the GNN semantics, i.e., graph embeddings and the sparse connections between graph nodes. The framework identifies intrinsic redundancies in node connections and captures representative node-sparse graph information that the system can readily ingest for processing. Our evaluation on an NVIDIA GPU shows up to 3.5× speedup over a baseline that processes all nodes at each time step.
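The core idea — recomputing embeddings only for nodes whose connectivity changed, rather than the whole graph at every time step — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual framework: the function names, the dictionary-based graph representation, and the one-hop mean aggregation are all assumptions made for clarity.

```python
# Hypothetical sketch of redundancy-aware dynamic GNN processing:
# at each time step, only nodes touched by edge updates are
# re-aggregated; all other embeddings are reused from a cache.
# (A real k-layer GNN would mark the k-hop neighborhood dirty.)

def affected_nodes(edge_changes):
    """Return the set of nodes incident to inserted/deleted edges."""
    nodes = set()
    for (u, v) in edge_changes:
        nodes.add(u)
        nodes.add(v)
    return nodes

def propagate(adj, feats, cache, dirty):
    """Recompute a one-hop mean aggregation only for dirty nodes."""
    out = dict(cache)  # reuse cached embeddings for unchanged nodes
    for n in dirty:
        neigh = adj.get(n, [])
        if neigh:
            out[n] = sum(feats[m] for m in neigh) / len(neigh)
        else:
            out[n] = feats[n]  # isolated node keeps its own feature
    return out

# Toy dynamic graph: 4 nodes; edge (2, 3) arrives at time step 1.
adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
feats = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
cache = propagate(adj, feats, {}, {0, 1, 2, 3})  # full pass at t=0

adj[2].append(3)
adj[3] = [2]
dirty = affected_nodes([(2, 3)])                 # {2, 3}
cache = propagate(adj, feats, cache, dirty)      # nodes 0 and 1 untouched
```

Under this sketch, the second `propagate` call does work proportional to the two affected nodes rather than all four, which is the source of the speedup the abstract reports on a GPU baseline that reprocesses every node.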
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-4 |
| Number of pages | 4 |
| Journal | IEEE Computer Architecture Letters |
| DOIs | |
| State | Accepted/In press - 2023 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: IEEE
Keywords
- Adaptation models
- Computational modeling
- Data models
- Distributed and scalable parallelism
- Dynamic graphs
- Graph neural networks
- Graphics processing units
- Kernel
- Redundancy