Due to cost, time, and flexibility constraints, computer architects use simulators to explore the design space when developing new processors and to evaluate the performance of potential enhancements. However, despite this dependence on simulators, statistically rigorous simulation methodologies are typically not used in computer architecture research. A formal methodology can provide a sound basis for drawing conclusions from simulation results by adding statistical rigor and, consequently, can increase the architect's confidence in those results. This paper demonstrates the application of a rigorous statistical technique to the setup and analysis phases of the simulation process. Specifically, we apply a Plackett and Burman design to: 1) identify key processor parameters, 2) classify benchmarks based on how they affect the processor, and 3) analyze the effect of processor enhancements. Our results show that, of the 41 user-configurable parameters in SimpleScalar, only 10 have a significant effect on execution time; of those 10, the number of reorder buffer entries and the L2 cache latency are by far the two most significant. Our results also show that Instruction Precomputation, a value-reuse-like microarchitectural technique, improves the processor's performance primarily by relieving integer ALU contention.
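As a minimal sketch of the screening technique named above (not the paper's actual experimental setup): a 12-run Plackett and Burman design assigns each factor a high (+1) or low (-1) level per run via cyclic shifts of a standard generator row, and a factor's main effect is estimated by contrasting the mean response at its high and low levels. The generator row is the standard 12-run PB generator; everything else here is an illustrative assumption.

```python
import numpy as np

def plackett_burman_12():
    """Return the 12-run x 11-factor Plackett-Burman design matrix (+1/-1).

    Rows 1-11 are cyclic shifts of the standard 12-run generator;
    the final row sets every factor to its low (-1) level.
    """
    gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows, dtype=int)

def main_effects(design, responses):
    """Main effect of each factor: mean(high runs) - mean(low runs)."""
    return design.T @ responses / (len(responses) / 2)

X = plackett_burman_12()
# The design columns are mutually orthogonal, so each factor's effect
# can be estimated independently from only 12 simulation runs.
assert np.array_equal(X.T @ X, 12 * np.eye(11, dtype=int))
```

In practice each row would correspond to one simulation run with the processor parameters set to the indicated extreme values, and `responses` would hold the measured execution times; factors whose effect magnitudes dominate are the significant ones.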
Bibliographical note and funding information:
The authors would like to thank Chris Hescott, Baris Kazar, Keith Osowski, Mike Tobin, and Keqiang Wu for their helpful comments on previous drafts of this work. A preliminary version of this work was presented at the Ninth Annual International Symposium on High-Performance Computer Architecture. This work was supported in part by US National Science Foundation grants CCR-9900605 and EIA-9971666, IBM Corporation, and the Minnesota Supercomputing Institute.
- Measurement techniques
- Performance analysis and design aids
- Simulation output analysis