TY - JOUR
T1 - Design and optimization of energy-accuracy tradeoff networks for mobile platforms via pretrained deep models
AU - Jayakodi, Nitthilan Kanappan
AU - Belakaria, Syrine
AU - Deshwal, Aryan
AU - Doppa, Janardhan Rao
N1 - Publisher Copyright:
© 2020 Association for Computing Machinery.
PY - 2020/2/7
Y1 - 2020/2/7
N2 - Many real-world edge applications, including object detection, robotics, and smart health, are enabled by deploying deep neural networks (DNNs) on energy-constrained mobile platforms. In this article, we propose a novel approach to trade off energy and accuracy of inference at runtime using a design space called Learning Energy Accuracy Tradeoff Networks (LEANets). The key idea behind LEANets is to design classifiers of increasing complexity using pretrained DNNs to perform input-specific adaptive inference. The accuracy and energy consumption of the adaptive inference scheme depend on a set of thresholds, one for each classifier. To determine the set of threshold vectors that achieve different energy and accuracy tradeoffs, we propose a novel multiobjective optimization approach. The appropriate threshold vector can then be selected at runtime based on the desired tradeoff. We perform experiments on multiple pretrained DNNs, including ConvNet, VGG-16, and MobileNet, using diverse image classification datasets. Our results show that we obtain up to a 50% gain in energy for a negligible loss in accuracy, and that optimized LEANets achieve significantly better energy and accuracy tradeoffs than a state-of-the-art method referred to as Slimmable neural networks.
AB - Many real-world edge applications, including object detection, robotics, and smart health, are enabled by deploying deep neural networks (DNNs) on energy-constrained mobile platforms. In this article, we propose a novel approach to trade off energy and accuracy of inference at runtime using a design space called Learning Energy Accuracy Tradeoff Networks (LEANets). The key idea behind LEANets is to design classifiers of increasing complexity using pretrained DNNs to perform input-specific adaptive inference. The accuracy and energy consumption of the adaptive inference scheme depend on a set of thresholds, one for each classifier. To determine the set of threshold vectors that achieve different energy and accuracy tradeoffs, we propose a novel multiobjective optimization approach. The appropriate threshold vector can then be selected at runtime based on the desired tradeoff. We perform experiments on multiple pretrained DNNs, including ConvNet, VGG-16, and MobileNet, using diverse image classification datasets. Our results show that we obtain up to a 50% gain in energy for a negligible loss in accuracy, and that optimized LEANets achieve significantly better energy and accuracy tradeoffs than a state-of-the-art method referred to as Slimmable neural networks.
KW - Deep neural networks
KW - Embedded systems
KW - Hardware
KW - Inference
KW - Software codesign
UR - http://www.scopus.com/inward/record.url?scp=85079571249&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85079571249&partnerID=8YFLogxK
U2 - 10.1145/3366636
DO - 10.1145/3366636
M3 - Article
AN - SCOPUS:85079571249
SN - 1539-9087
VL - 19
JO - ACM Transactions on Embedded Computing Systems
JF - ACM Transactions on Embedded Computing Systems
IS - 1
M1 - 4
ER -