Convolutional neural networks (CNNs) are widely used in artificial intelligence (AI) applications, and a major part of a CNN's computation involves 2D convolution. In this paper, we propose novel fast convolution algorithms for both 1D and 2D convolution that remove redundant multiplication operations at the cost of a controlled increase in addition operations. For example, when the 2D processing block size is 3×3, our algorithm achieves a multiplication saving factor as high as 3.24 compared to the direct 2D convolution scheme. The proposed algorithm can also process input feature maps and generate output feature maps with flexible block sizes that are independent of the convolution weight kernel size, and it substantially improves memory access efficiency. By exploiting flexible feature-map processing tile sizes, the proposed structures can be applied to different CNN layers, such as convolution with stride > 1, pooling, and deconvolution. The algorithm is suitable for both software and hardware implementation.
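To illustrate the multiply-for-add trade-off the abstract describes, the classical Winograd F(2,3) identity (listed among the keywords below) computes two outputs of a 3-tap FIR filter with 4 multiplications instead of the 6 required by direct computation. This is a minimal sketch of that standard identity, not the authors' proposed algorithm; function names are ours.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two correlation outputs of a 3-tap filter g
    over 4 input samples d, using 4 multiplications instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform (precomputable once per kernel, so its
    # multiplications are not counted against the per-block cost).
    a = g0
    b = (g0 + g1 + g2) / 2.0
    c = (g0 - g1 + g2) / 2.0
    e = g2
    # The 4 general multiplications.
    m1 = (d0 - d2) * a
    m2 = (d1 + d2) * b
    m3 = (d2 - d1) * c
    m4 = (d1 - d3) * e
    # Output transform: additions only.
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_f23(d, g):
    """Reference: direct sliding correlation, 6 multiplications."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]
```

Nesting this 1D identity in both dimensions yields the familiar F(2×2, 3×3) 2D scheme (16 multiplications versus 36 for direct computation, a factor of 2.25); the paper's 3×3-block algorithm reports an even higher saving factor of 3.24.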
Original language: English (US)
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems I: Regular Papers
State: Published - May 2020
Bibliographical note
Funding Information:
Manuscript received November 5, 2019; revised December 20, 2019; accepted January 4, 2020. Date of publication January 22, 2020; date of current version May 1, 2020. The work of Keshab K. Parhi was supported by the National Science Foundation under Grant CCF-1814759. This article was recommended by Associate Editor G. Jovanovic Dolecek. (Corresponding author: Chao Cheng.) Chao Cheng is with the AI Computation Technologies Laboratory, Alibaba Damo Academy, Sunnyvale, CA 94085 USA (e-mail: firstname.lastname@example.org).
- Convolutional neural network
- Kronecker product
- Winograd algorithm
- fast convolution
- parallel FIR filter