A length adaptive algorithm-hardware co-design of transformer on FPGA through sparse attention and dynamic pipelining

Hongwu Peng, Shaoyi Huang, Shiyang Chen, Bingbing Li, Tong Geng, Ang Li, Weiwen Jiang, Wujie Wen, Jinbo Bi, Hang Liu, Caiwen Ding

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

21 Scopus citations

Abstract

Transformers have been considered one of the most important deep learning models since 2018, in part because they establish state-of-the-art (SOTA) records and could potentially replace existing Deep Neural Networks (DNNs). Despite these remarkable triumphs, the prolonged turnaround time of Transformer models is a widely recognized roadblock. The variety of sequence lengths imposes additional computing overhead, since inputs must be zero-padded to the maximum sentence length in the batch to accommodate parallel computing platforms. This paper targets the field-programmable gate array (FPGA) and proposes a coherent sequence-length-adaptive algorithm-hardware co-design for Transformer acceleration. In particular, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm. The proposed sparse attention operator reduces attention-based models to linear complexity and alleviates off-chip memory traffic. The proposed length-aware hardware resource scheduling algorithm dynamically allocates hardware resources to fill the pipeline slots and eliminates bubbles for NLP tasks. Experiments show that our design incurs very small accuracy loss and achieves 80.2× and 2.6× speedups over CPU and GPU implementations, respectively, as well as 4× higher energy efficiency than a state-of-the-art GPU accelerator optimized via cuBLAS GEMM.
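
The zero-padding overhead the abstract targets is easy to quantify. The following minimal Python sketch (the corpus statistics, batch size, and function name are illustrative assumptions, not from the paper) measures the fraction of compute wasted when every batch is padded to its longest sequence, and shows how simply grouping inputs by length, in the spirit of the paper's length-adaptive scheduling, shrinks that waste.

    import random

    def wasted_fraction(lengths, batch_size):
        """Fraction of token slots that are zero padding when every
        batch is padded to its longest member."""
        pad, slots = 0, 0
        for i in range(0, len(lengths), batch_size):
            batch = lengths[i:i + batch_size]
            longest = max(batch)
            slots += longest * len(batch)         # padded compute
            pad += sum(longest - n for n in batch)  # wasted compute
        return pad / slots

    random.seed(0)
    # Hypothetical mixed-length corpus: 4096 sentences of 8-128 tokens.
    lengths = [random.randint(8, 128) for _ in range(4096)]
    print(f"arrival order: {wasted_fraction(lengths, 32):.1%} padded slots")
    print(f"length-sorted: {wasted_fraction(sorted(lengths), 32):.1%} padded slots")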
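
The abstract does not spell out the sparsity pattern of the proposed attention operator, so the sketch below is only a generic illustration of how a sparse pattern brings dense O(n²) attention down to linear O(n·w) work: a sliding-window attention in NumPy, where each query attends to its w neighbors on either side. The function name and window size are hypothetical, not the paper's operator.

    import numpy as np

    def sliding_window_attention(q, k, v, w):
        """Each query attends only to keys within +/-w positions,
        so the cost is O(n * w) rather than the dense O(n^2).
        q, k, v: (n, d) arrays for a single head."""
        n, d = q.shape
        out = np.zeros_like(v)
        for i in range(n):
            lo, hi = max(0, i - w), min(n, i + w + 1)
            scores = q[i] @ k[lo:hi].T / np.sqrt(d)
            weights = np.exp(scores - scores.max())  # stable softmax
            weights /= weights.sum()
            out[i] = weights @ v[lo:hi]
        return out

    rng = np.random.default_rng(0)
    n, d = 64, 16
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    print(sliding_window_attention(q, k, v, w=4).shape)  # (64, 16)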

Original language: English (US)
Title of host publication: Proceedings of the 59th ACM/IEEE Design Automation Conference, DAC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1135-1140
Number of pages: 6
ISBN (Electronic): 9781450391429
DOIs
State: Published - Jul 10 2022
Externally published: Yes
Event: 59th ACM/IEEE Design Automation Conference, DAC 2022 - San Francisco, United States
Duration: Jul 10 2022 - Jul 14 2022

Publication series

Name: Proceedings - Design Automation Conference
ISSN (Print): 0738-100X

Conference

Conference: 59th ACM/IEEE Design Automation Conference, DAC 2022
Country/Territory: United States
City: San Francisco
Period: 7/10/22 - 7/14/22

Bibliographical note

Publisher Copyright:
© 2022 ACM.

Keywords

  • attention
  • BERT
  • FPGA
  • length adaptive
  • transformer
