TY - GEN
T1 - Analysis of statistical sampling in microarchitecture simulation
T2 - 2007 IEEE International Symposium on Workload Characterization, IISWC
AU - Kodakara, Sreekumar V.
AU - Kim, Jinpyo
AU - Lilja, David J.
AU - Hsu, Wei Chung
AU - Yew, Pen Chung
PY - 2007
Y1 - 2007
N2 - Statistical sampling, especially stratified random sampling, is a promising technique for estimating the performance of a benchmark program without executing the complete program on microarchitecture simulators or real machines. The accuracy of the performance estimate and the simulation cost depend on three parameters, namely the interval size, the sample size, and the number of phases (or strata). Optimum values for these three parameters depend on the performance behavior of the program and the microarchitecture configuration being evaluated. In this paper, we quantify the effect of these three parameters and their interactions on the accuracy of the performance estimate and the simulation cost. We use the Confidence Interval of estimated Mean (CIM), a metric derived from statistical sampling theory, to measure the accuracy of the performance estimate; we also discuss why CIM is an appropriate metric for this analysis. We use the total number of instructions simulated and the total number of samples measured as cost parameters. Finally, we characterize 21 SPEC CPU2000 benchmarks based on our analysis.
AB - Statistical sampling, especially stratified random sampling, is a promising technique for estimating the performance of a benchmark program without executing the complete program on microarchitecture simulators or real machines. The accuracy of the performance estimate and the simulation cost depend on three parameters, namely the interval size, the sample size, and the number of phases (or strata). Optimum values for these three parameters depend on the performance behavior of the program and the microarchitecture configuration being evaluated. In this paper, we quantify the effect of these three parameters and their interactions on the accuracy of the performance estimate and the simulation cost. We use the Confidence Interval of estimated Mean (CIM), a metric derived from statistical sampling theory, to measure the accuracy of the performance estimate; we also discuss why CIM is an appropriate metric for this analysis. We use the total number of instructions simulated and the total number of samples measured as cost parameters. Finally, we characterize 21 SPEC CPU2000 benchmarks based on our analysis.
UR - https://www.scopus.com/pages/publications/47349123422
U2 - 10.1109/IISWC.2007.4362190
DO - 10.1109/IISWC.2007.4362190
M3 - Conference contribution
AN - SCOPUS:47349123422
SN - 1424415616
SN - 9781424415618
T3 - Proceedings of the 2007 IEEE International Symposium on Workload Characterization, IISWC
SP - 139
EP - 148
BT - Proceedings of the 2007 IEEE International Symposium on Workload Characterization, IISWC
Y2 - 27 September 2007 through 29 September 2007
ER -