ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design

Haoran You, Zhanyi Sun, Huihong Shi, Zhongzhi Yu, Yang Zhao, Yongan Zhang, Chaojian Li, Baopu Li, Yingyan Lin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

99 Scopus citations

Abstract

Vision Transformers (ViTs) have achieved state-of-the-art performance on various vision tasks. However, ViTs' self-attention module is still arguably a major bottleneck, limiting their achievable hardware efficiency and more extensive applications to resource-constrained platforms. Meanwhile, existing accelerators dedicated to NLP Transformers are not optimal for ViTs. This is because there is a large difference between ViTs and Transformers for natural language processing (NLP) tasks: ViTs have a relatively fixed number of input tokens, whose attention maps can be pruned by up to 90% even with fixed sparse patterns without severely hurting the model accuracy (e.g., ≤1.5% accuracy drop under a 90% pruning ratio), while NLP Transformers need to handle input sequences of varying numbers of tokens and rely on on-the-fly predictions of dynamic sparse attention patterns for each input to achieve a decent sparsity (e.g., ≥50%). To this end, we propose a dedicated algorithm and accelerator co-design framework dubbed ViTCoD for accelerating ViTs. Specifically, on the algorithm level, ViTCoD prunes and polarizes the attention maps into either denser or sparser fixed patterns, regularizing two levels of workloads without hurting the accuracy; this largely reduces the attention computations while leaving room for alleviating the remaining dominant data movements. On top of that, we further integrate a lightweight and learnable auto-encoder module to enable trading the dominant high-cost data movements for lower-cost computations. On the hardware level, we develop a dedicated accelerator that simultaneously coordinates the enforced denser and sparser workloads for boosted hardware utilization, while integrating on-chip encoder and decoder engines to leverage ViTCoD's algorithm pipeline for much reduced data movements. Extensive experiments and ablation studies validate that ViTCoD largely reduces the dominant data movement costs, achieving speedups of up to 235.3×, 142.9×, and 86.0× over general computing platforms (CPUs, Edge GPUs, and GPUs, respectively), and of up to 10.1× and 6.8× over the prior-art Transformer accelerators SpAtten and Sanger, respectively, under an attention sparsity of 90%. Our code implementation is available at https://github.com/GATECH-EIC/ViTCoD.
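
The abstract's first algorithmic idea is that a ViT's attention maps, computed over a fixed number of tokens, can be pruned offline into a fixed pattern that is then polarized into a denser part (a few "global" key tokens attended by all queries) and a sparser part (the rest). The sketch below, in PyTorch, illustrates one plausible way to derive such a fixed mask; the function name `polarize_attention_mask`, the column-sum ranking criterion, and the `keep_ratio` parameter are assumptions for illustration, not the exact criterion used in the ViTCoD paper or repository.

```python
import torch

def polarize_attention_mask(attn_probs, keep_ratio=0.1):
    """Derive a fixed denser-plus-sparser attention mask from averaged
    attention probabilities gathered offline on training data.

    attn_probs: (heads, N, N) tensor of attention probabilities.
    Keys that receive the most total attention are kept as dense
    full columns ("global" tokens); all other positions are reduced
    to the diagonal. Returns a binary (heads, N, N) mask.
    """
    H, N, _ = attn_probs.shape
    n_dense = max(1, int(N * keep_ratio))
    # Rank each key token by how much attention it receives across queries.
    col_scores = attn_probs.sum(dim=1)                     # (H, N)
    dense_cols = col_scores.topk(n_dense, dim=-1).indices  # (H, n_dense)

    mask = torch.zeros(H, N, N, dtype=torch.bool)
    # Sparser branch: keep only the diagonal (each token attends to itself).
    mask[:, torch.arange(N), torch.arange(N)] = True
    # Denser branch: keep whole columns for the top-ranked keys.
    for h in range(H):
        mask[h, :, dense_cols[h]] = True
    return mask
```

With `keep_ratio=0.1` this yields roughly 90% sparsity, matching the pruning ratio quoted in the abstract; in practice the model would be fine-tuned with attention logits set to -inf outside the mask so that accuracy recovers under the fixed pattern.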
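The second algorithmic idea is a lightweight, learnable auto-encoder that compresses tensors before they cross the costly memory boundary and decompresses them on-chip, trading high-cost data movement for low-cost computation. Below is a minimal sketch of this trade-off around Q/K projections; the class name `QKAutoEncoder`, the single-linear-layer encoder/decoder, and the 2× compression ratio are illustrative assumptions rather than the exact ViTCoD configuration.

```python
import torch
import torch.nn as nn

class QKAutoEncoder(nn.Module):
    """Sketch of a learnable auto-encoder wrapped around Q/K tensors.

    Encoding shrinks the head dimension before the tensors are moved
    (e.g., written to or read from off-chip memory); the paired decoder
    restores the original dimension right before attention is computed.
    NOTE: dimensions and depth are assumptions for illustration.
    """
    def __init__(self, head_dim=64, compressed_dim=32):
        super().__init__()
        self.encoder = nn.Linear(head_dim, compressed_dim, bias=False)
        self.decoder = nn.Linear(compressed_dim, head_dim, bias=False)

    def forward(self, x):
        z = self.encoder(x)       # fewer bytes cross the memory boundary
        return self.decoder(z)    # reconstructed on-chip before QK^T

# Example: (heads, tokens, head_dim) shaped like ViT-Base with a CLS token.
q = torch.randn(8, 197, 64)
ae = QKAutoEncoder()
q_hat = ae(q)                     # trained end-to-end with the ViT
```

On the accelerator side, this is what the abstract's on-chip encoder and decoder engines correspond to: only the compressed representation is moved, and the decode step becomes extra (cheap) on-chip computation.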

Original language: English (US)
Title of host publication: 2023 IEEE International Symposium on High-Performance Computer Architecture, HPCA 2023 - Proceedings
Publisher: IEEE Computer Society
Pages: 273-286
Number of pages: 14
ISBN (Electronic): 9781665476522
DOIs
State: Published - 2023
Externally published: Yes
Event: 29th IEEE International Symposium on High-Performance Computer Architecture, HPCA 2023 - Montreal, Canada
Duration: Feb 25, 2023 - Mar 1, 2023

Publication series

Name: Proceedings - International Symposium on High-Performance Computer Architecture
Volume: 2023-February
ISSN (Print): 1530-0897

Conference

Conference: 29th IEEE International Symposium on High-Performance Computer Architecture, HPCA 2023
Country/Territory: Canada
City: Montreal
Period: 2/25/23 - 3/1/23

Bibliographical note

Publisher Copyright: © 2023 IEEE.
