Backpropagation Computation for Training Graph Attention Networks

Joe Gould, Keshab K. Parhi

Research output: Contribution to journal › Article › peer-review

Abstract

Graph Neural Networks (GNNs) are a form of deep learning that has found use in a variety of problems, including the modeling of drug interactions, time-series analysis, and traffic prediction. They represent the problem using non-Euclidean graphs, allowing for a high degree of versatility, and are able to learn complex relationships by iteratively aggregating contextual information from increasingly distant neighbors. Inspired by the power of attention in transformers, Graph Attention Networks (GATs) incorporate an attention mechanism on top of graph aggregation. GATs are considered the state of the art due to their superior performance. To learn the best parameters for a given graph problem, GATs use traditional backpropagation to compute weight updates. To the best of our knowledge, these updates are calculated in software, and closed-form equations describing their calculation for GATs are not well known. This paper derives closed-form equations for backpropagation in GATs using matrix notation. These equations can form the basis for the design of hardware accelerators for training GATs.
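
As a rough illustration of the forward computation whose gradients the paper derives, the sketch below implements a single-head GAT layer in NumPy following the standard GAT formulation (a shared weight matrix W, an attention vector a, LeakyReLU logits, and a softmax over each node's neighborhood). The variable names and the dense-adjacency layout are assumptions chosen for readability, not the paper's notation, and the paper's closed-form backpropagation equations are not reproduced here.

    import numpy as np

    def leaky_relu(x, negative_slope=0.2):
        return np.where(x > 0, x, negative_slope * x)

    def gat_layer_forward(H, A, W, a):
        """Single-head GAT layer forward pass (standard formulation).

        H : (N, F)    input node features
        A : (N, N)    adjacency matrix with self-loops (nonzero where an edge exists)
        W : (F, Fp)   shared linear transform
        a : (2*Fp,)   attention vector, split into source and destination halves
        Returns the updated node features (N, Fp) and the attention matrix (N, N).
        """
        Z = H @ W                        # transformed node features, shape (N, Fp)
        Fp = Z.shape[1]
        s_src = Z @ a[:Fp]               # per-node source term  a_src^T z_i, shape (N,)
        s_dst = Z @ a[Fp:]               # per-node destination term a_dst^T z_j, shape (N,)

        # Raw attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
        e = leaky_relu(s_src[:, None] + s_dst[None, :])

        # Mask out non-edges, then softmax over each node's neighborhood
        e = np.where(A > 0, e, -np.inf)
        e = e - e.max(axis=1, keepdims=True)          # numerical stability
        exp_e = np.exp(e)
        alpha = exp_e / exp_e.sum(axis=1, keepdims=True)

        return alpha @ Z, alpha

    # Tiny usage example on a 3-node graph (values are arbitrary)
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = np.array([[1, 1, 0],
                      [1, 1, 1],
                      [0, 1, 1]], dtype=float)       # adjacency with self-loops
        H = rng.normal(size=(3, 4))                  # N=3 nodes, F=4 features
        W = rng.normal(size=(4, 2))                  # Fp=2 output features
        a = rng.normal(size=(4,))                    # 2*Fp = 4
        H_out, alpha = gat_layer_forward(H, A, W, a)
        print(H_out.shape, alpha.sum(axis=1))        # (3, 2); attention rows sum to 1

Backpropagating through this layer requires differentiating the neighborhood softmax, the LeakyReLU logits, and the shared transform with respect to W and a, which is the computation the paper expresses in closed matrix form.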

Original language: English (US)
Pages (from-to): 1-14
Number of pages: 14
Journal: Journal of Signal Processing Systems
Volume: 96
Issue number: 1
DOIs
State: Published - Jan 2024

Bibliographical note

Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.

Keywords

  • Backpropagation
  • Gradient computation
  • Graph attention networks
  • Neural network training
