Designing a Cost-Effective Cache Replacement Policy using Machine Learning

Subhash Sethumurugan, Jieming Yin, John Sartori

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Extensive research has been carried out to improve cache replacement policies, yet designing an efficient cache replacement policy that incurs low hardware overhead remains a challenging and time-consuming task. Given the surging interest in applying machine learning (ML) to challenging computer architecture design problems, we use ML as an offline tool to design a cost-effective cache replacement policy. We demonstrate that ML is capable of guiding and expediting the generation of a cache replacement policy that is competitive with state-of-the-art hand-crafted policies. In this work, we use Reinforcement Learning (RL) to learn a cache replacement policy. After analyzing the learned model, we are able to focus on a few critical features that might impact system performance. Using the insights provided by RL, we successfully derive a new cache replacement policy: Reinforcement Learned Replacement (RLR). Compared to the state-of-the-art policies, RLR has low hardware overhead, and it can be implemented without modifying the processor's control and data paths to propagate information such as the program counter. On average, RLR improves single-core and four-core system performance by 3.25% and 4.86% over LRU, with an overhead of 16.75KB for a 2MB last-level cache (LLC) and 67KB for an 8MB LLC.
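To illustrate the offline-RL step the abstract describes, the sketch below trains a tabular Q-learning agent to pick an eviction victim in a single set of a toy 4-way cache, using per-line ages as state. Everything here is an illustrative assumption, not the paper's formulation: the feature set, the bucketing, the hyperparameters, and the oracle reward (which simply rewards evicting the oldest line) are stand-ins; RLR itself is a hand-derived policy informed by such a learned model, not this agent.

```python
import random

random.seed(0)  # deterministic toy run

WAYS = 4      # associativity of the toy cache set (assumed)
ALPHA = 0.1   # learning rate (assumed value)
GAMMA = 0.9   # discount factor (assumed value)
EPS = 0.1     # epsilon-greedy exploration rate (assumed value)

# State: the (bucketed) age of each line in the set; action: which way to evict.
Q = {}  # maps state tuple -> list of WAYS action-values

def state_of(ages):
    # Coarse age buckets keep the tabular state space small.
    return tuple(min(a, 3) for a in ages)

def q_values(ages):
    return Q.setdefault(state_of(ages), [0.0] * WAYS)

def greedy_victim(ages):
    # The learned policy: evict the way with the highest action-value.
    q = q_values(ages)
    return max(range(WAYS), key=lambda w: q[w])

def choose_victim(ages):
    if random.random() < EPS:
        return random.randrange(WAYS)
    return greedy_victim(ages)

# Offline training on a synthetic workload where the oldest line is, by
# construction, the least likely to be reused -- a toy oracle reward,
# not the reward used in the paper.
for _ in range(10000):
    ages = random.sample(range(WAYS), WAYS)  # random distinct line ages
    victim = choose_victim(ages)
    reward = 1.0 if ages[victim] == max(ages) else -1.0
    # Evicted way is refilled with a fresh line (age 0); the rest grow older.
    next_ages = [0 if w == victim else ages[w] + 1 for w in range(WAYS)]
    q = q_values(ages)
    q[victim] += ALPHA * (reward + GAMMA * max(q_values(next_ages)) - q[victim])
```

After training, inspecting the Q-table (rather than deploying the agent) is what yields a hardware-friendly insight here: the learned values consistently favor evicting the oldest line, which a designer could then encode directly as a cheap age-based heuristic.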

Original language: English (US)
Title of host publication: Proceedings - 27th IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
Publisher: IEEE Computer Society
Pages: 291-303
Number of pages: 13
ISBN (Electronic): 9780738123370
DOIs
State: Published - Feb 1 2021
Event: 27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021 - Virtual, Seoul, Korea, Republic of
Duration: Feb 27 2021 - Mar 1 2021

Publication series

Name: 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA)

Conference

Conference: 27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
Country/Territory: Korea, Republic of
City: Virtual, Seoul
Period: 2/27/21 - 3/1/21

Bibliographical note

Publisher Copyright:
© 2021 IEEE.
