Cross-layer dynamic prefetching allocation strategies for high-performance multicores

Yin Chi Peng, Chien Chih Chen, Chia Jung Chang, Tien Fu Chen, Pen Chung Yew

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

Over the last decade, a variety of hardware prefetching techniques have been proposed to improve system performance. However, because cache space and bandwidth are limited in a multicore system, data fetched by the prefetcher may pollute the L1 cache even when that data is useful, resulting in significant performance degradation. Most contemporary multicore systems therefore simply disable prefetching to avoid unexpected contention. This paper proposes a cross-layer, dynamic Prefetch Allocation Management (PAM) scheme that provides better caching strategies in a parallel environment. Our approach has two main mechanisms, which target the prefetch degree and the cache insertion location, respectively, to minimize cache pollution and contention. Across a variety of SPLASH-2 and PARSEC benchmarks, PAM improves performance by up to 12% on a 4-core multicore system compared to a static prefetcher configuration, while also reducing memory bandwidth consumption by 9.1%.

Original language: English (US)
Title of host publication: 2013 International Symposium on VLSI Design, Automation, and Test, VLSI-DAT 2013
DOIs
State: Published - 2013
Event: 2013 International Symposium on VLSI Design, Automation, and Test, VLSI-DAT 2013 - Hsinchu, Taiwan, Province of China
Duration: Apr 22 2013 - Apr 24 2013

Publication series

Name: 2013 International Symposium on VLSI Design, Automation, and Test, VLSI-DAT 2013

Other

Other: 2013 International Symposium on VLSI Design, Automation, and Test, VLSI-DAT 2013
Country/Territory: Taiwan, Province of China
City: Hsinchu
Period: 4/22/13 - 4/24/13

