TY - JOUR
T1 - Performance Analysis of CNN Inference/Training with Convolution and Non-Convolution Operations on ASIC Accelerators
AU - Esmaeilzadeh, Hadi
AU - Ghodrati, Soroush
AU - Kahng, Andrew B.
AU - Kinzer, Sean
AU - Manasi, Susmita Dey
AU - Sapatnekar, Sachin S.
AU - Wang, Zhiang
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/11/8
Y1 - 2024/11/8
AB - Today's performance analysis frameworks for deep learning accelerators suffer from two significant limitations. First, although modern convolutional neural networks (CNNs) consist of many types of layers other than convolution, especially during training, these frameworks largely focus on convolution layers only. Second, these frameworks are generally targeted towards inference and lack support for training operations. This work proposes a novel open-source performance analysis framework, SimDIT, for general ASIC-based systolic hardware accelerator platforms. The modeling effort of SimDIT comprehensively covers convolution and non-convolution operations of both CNN inference and training on a highly parameterizable hardware substrate. SimDIT is integrated with a backend silicon implementation flow and provides detailed end-to-end performance statistics (i.e., data access cost, cycle counts, energy, and power) for executing CNN inference and training workloads. SimDIT-enabled performance analysis reveals that on a 64×64 processing array, non-convolution operations constitute 59.5% of the total runtime for the ResNet-50 training workload. In addition, by optimally distributing available off-chip DRAM bandwidth and on-chip SRAM resources, SimDIT achieves an 18× performance improvement over a generic static resource allocation for ResNet-50 inference.
KW - Convolutional neural network
KW - hardware accelerator
KW - inference
KW - performance simulator
KW - training
UR - http://www.scopus.com/inward/record.url?scp=85212948093&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85212948093&partnerID=8YFLogxK
U2 - 10.1145/3696665
DO - 10.1145/3696665
M3 - Article
AN - SCOPUS:85212948093
SN - 1084-4309
VL - 30
JO - ACM Transactions on Design Automation of Electronic Systems
JF - ACM Transactions on Design Automation of Electronic Systems
IS - 1
M1 - 3
ER -