IR-Aware ECO Timing Optimization Using Reinforcement Learning

Wenjing Jiang, Vidya A Chhabria, Sachin S. Sapatnekar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Engineering change orders (ECOs) at late design stages make minimal design fixes to recover from timing shifts caused by excessive IR drops. This paper integrates IR-drop-aware timing analysis and ECO timing optimization using reinforcement learning (RL). The method operates after physical design and power grid synthesis, and rectifies IR-drop-induced timing degradation through gate sizing. It incorporates the Lagrangian relaxation (LR) technique into a novel RL framework, which trains a relational graph convolutional network (R-GCN) agent to sequentially size gates to fix timing violations. The R-GCN agent outperforms a classical LR-only algorithm: in an open 45nm technology, it (a) moves the Pareto front of the delay-power tradeoff curve to the left, (b) saves runtime over prior approaches by running fast inference with trained models, and (c) reduces perturbation to the placement by sizing fewer cells. The RL model is transferable across timing specifications, and to unseen designs with fine-tuning.
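The abstract's core loop, sequentially resizing gates to minimize a Lagrangian-relaxed cost combining power and timing violation, can be illustrated with a toy sketch. This is not the paper's implementation: the gate library, single-path delay model, fixed multiplier `lam`, and the greedy stand-in for the trained R-GCN policy are all illustrative assumptions.

```python
# Toy sketch (not the paper's code): sequential gate sizing driven by a
# Lagrangian-relaxation cost, mimicking the RL agent's action loop.
# Each gate maps to (delay, power) options per size; larger sizes are
# faster but burn more power. All numbers are made up for illustration.
gates = {
    "g1": [(3.0, 1.0), (2.0, 2.0), (1.5, 3.5)],
    "g2": [(4.0, 1.2), (2.5, 2.4), (2.0, 4.0)],
    "g3": [(2.5, 0.8), (1.8, 1.6), (1.4, 2.8)],
}

def path_delay(choice):
    # One critical path through all gates (toy timing model).
    return sum(gates[g][i][0] for g, i in choice.items())

def total_power(choice):
    return sum(gates[g][i][1] for g, i in choice.items())

def lagrangian_cost(choice, clock, lam):
    # LR objective: power plus lambda-weighted timing violation.
    violation = max(0.0, path_delay(choice) - clock)
    return total_power(choice) + lam * violation

def size_gates(clock, lam=10.0):
    # Greedy stand-in for the trained policy: at each step, apply the
    # single gate resize that most reduces the Lagrangian cost.
    choice = {g: 0 for g in gates}  # start at minimum sizes
    while True:
        base = lagrangian_cost(choice, clock, lam)
        best = None
        for g in gates:
            for i in range(len(gates[g])):
                if i == choice[g]:
                    continue
                trial = dict(choice, **{g: i})
                c = lagrangian_cost(trial, clock, lam)
                if best is None or c < best[0]:
                    best = (c, g, i)
        if best is None or best[0] >= base - 1e-9:
            return choice  # no move improves the cost
        choice[best[1]] = best[2]

choice = size_gates(clock=6.5)
print(path_delay(choice), total_power(choice))  # meets the 6.5 clock
```

In the paper's framework the greedy sweep is replaced by an R-GCN agent that learns which gate to act on next from the circuit graph, which is what enables fast inference and fewer cell changes than an LR-only pass.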

Original language: English (US)
Title of host publication: MLCAD 2024 - Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9798400706998
DOIs
State: Published - Sep 9 2024
Event: 6th ACM/IEEE International Symposium on Machine Learning for CAD, MLCAD 2024 - Snowbird, United States
Duration: Sep 9, 2024 – Sep 11, 2024

Publication series

Name: MLCAD 2024 - Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD

Conference

Conference: 6th ACM/IEEE International Symposium on Machine Learning for CAD, MLCAD 2024
Country/Territory: United States
City: Snowbird
Period: 9/9/24 – 9/11/24

Bibliographical note

Publisher Copyright:
© 2024 ACM.
