Multiprocessor cache design considerations

Roland L. Lee, Pen Chung Yew, Duncan H. Lawrie

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Cache design is explored for large high-performance multiprocessors with hundreds or thousands of processors and memory modules interconnected by a pipelined multistage network. Multiprocessor conditions are identified and modeled, including: (1) the cost of a cache coherence enforcement scheme; (2) the effect of a high degree of overlap between cache miss services; (3) the cost of a pin-limited data path between shared memory and caches; (4) the effect of a high degree of data prefetching; (5) the program behavior of a scientific workload, as represented by 23 numerical subroutines; and (6) the parallel execution of programs. This model is used to show that the cache miss ratio is not a suitable performance measure in the multiprocessors of interest and to show that the optimal cache block size in such multiprocessors is much smaller than in many uniprocessors.
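The abstract's central claim can be illustrated with a toy model (this is a hypothetical sketch, not the paper's actual model; all parameter values below are invented assumptions): larger blocks lower the miss ratio by exploiting spatial locality, but over a pin-limited data path each miss must transfer the whole block, so miss penalty grows with block size. Minimizing miss ratio alone then favors ever-larger blocks, while minimizing effective access time favors a smaller block.

```python
# Toy model (hypothetical parameters) of the trade-off described above:
# miss ratio falls with block size, but the pin-limited transfer cost rises.

def miss_ratio(block_size, base=0.10, locality=0.7):
    """Assumed miss-ratio curve: spatial locality makes the miss ratio
    fall monotonically as the block size (in words) grows."""
    return base * (block_size / 8) ** (-locality)

def miss_penalty(block_size, network_latency=20, path_width=4):
    """Cycles to service a miss: fixed pipelined-network latency plus
    block_size / path_width cycles to move the block over a narrow,
    pin-limited data path between shared memory and the cache."""
    return network_latency + block_size / path_width

def effective_access_time(block_size, hit_time=1):
    """Average memory access time in cycles: hit time plus the
    miss-ratio-weighted miss penalty."""
    return hit_time + miss_ratio(block_size) * miss_penalty(block_size)

if __name__ == "__main__":
    for b in (8, 16, 32, 64, 128, 256, 512):
        print(f"block={b:4d}  miss_ratio={miss_ratio(b):.4f}  "
              f"access_time={effective_access_time(b):.3f}")
```

With these assumed numbers the miss ratio keeps shrinking all the way to the largest block, yet the effective access time stops improving and turns back up, so the two metrics pick different "optimal" block sizes, which is the abstract's point in miniature.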

Original language: English (US)
Title of host publication: Conference Proceedings - Annual Symposium on Computer Architecture
Number of pages: 10
ISBN (Print): 0818607769, 9780818607769
State: Published - 1987

Publication series

Name: Conference Proceedings - Annual Symposium on Computer Architecture
ISSN (Print): 0149-7111


