Improving computer architecture simulation methodology by adding statistical rigor

Joshua J. Yi, David J. Lilja, Douglas M. Hawkins

Research output: Contribution to journal › Article



Due to cost, time, and flexibility constraints, computer architects use simulators to explore the design space when developing new processors and to evaluate the performance of potential enhancements. However, despite this dependence on simulators, statistically rigorous simulation methodologies are typically not used in computer architecture research. A formal methodology can provide a sound basis for drawing conclusions from simulation results by adding statistical rigor and, consequently, can increase the architect's confidence in those results. This paper demonstrates the application of a rigorous statistical technique to the setup and analysis phases of the simulation process. Specifically, we apply a Plackett and Burman design to: 1) identify key processor parameters, 2) classify benchmarks based on how they affect the processor, and 3) analyze the effect of processor enhancements. Our results showed that, out of the 41 user-configurable parameters in SimpleScalar, only 10 had a significant effect on the execution time. Of those 10, the number of reorder buffer entries and the L2 cache latency were by far the two most significant. Our results also showed that Instruction Precomputation - a value reuse-like microarchitectural technique - primarily improves the processor's performance by relieving integer ALU contention.
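The core of a Plackett-Burman screening study can be illustrated with a small sketch. The paper varies SimpleScalar's 41 user-configurable parameters; the toy below instead screens 7 hypothetical two-level factors in an 8-run design (the smallest non-trivial PB size) and ranks them by the magnitude of their main effects, which is the same ranking logic used to identify the most significant parameters. The factor layout and the synthetic "execution time" response are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's actual experiment): an 8-run
# Plackett-Burman design for 7 two-level factors, with main effects
# computed from a synthetic response. All names/values are invented.

def plackett_burman_8():
    """8-run PB design: 7 cyclic shifts of a generator row, plus an all -1 row."""
    gen = [1, 1, 1, -1, 1, -1, -1]   # standard generator for N = 8
    rows = [gen[-i:] + gen[:-i] for i in range(7)]  # cyclic right shifts
    rows.append([-1] * 7)
    return rows

def main_effects(design, responses):
    """Effect of factor j = mean(response at +1) - mean(response at -1)."""
    n_factors = len(design[0])
    effects = []
    for j in range(n_factors):
        hi = [y for row, y in zip(design, responses) if row[j] == 1]
        lo = [y for row, y in zip(design, responses) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

if __name__ == "__main__":
    design = plackett_burman_8()
    # Synthetic response: pretend factor 0 (say, ROB entries) dominates
    # and factor 2 (say, L2 latency) matters somewhat; the rest are inert.
    responses = [100 - 30 * row[0] - 5 * row[2] for row in design]
    effects = main_effects(design, responses)
    ranked = sorted(range(7), key=lambda j: abs(effects[j]), reverse=True)
    print("effects:", effects)          # factors 0 and 2 stand out
    print("ranked factors:", ranked)    # most significant first
```

Because the PB columns are mutually orthogonal, the two active factors' effects are estimated cleanly (-60 and -10 here) while the inert factors come out at zero, which is why a handful of runs suffices to screen many parameters at once.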

Original language: English (US)
Pages (from-to): 1360-1373
Number of pages: 14
Journal: IEEE Transactions on Computers
Issue number: 11
State: Published - Nov 1 2005



Keywords
  • Measurement techniques
  • Performance analysis and design aids
  • Simulation output analysis
