2007 IEEE 10th International Symposium on Workload Characterization

Abstract

Statistical sampling, especially stratified random sampling, is a promising technique for estimating the performance of a benchmark program without executing the complete program on a microarchitecture simulator or real machine. The accuracy of the performance estimate and the simulation cost depend on three parameters: the interval size, the sample size, and the number of phases (or strata). Optimum values for these three parameters depend on the performance behavior of the program and the microarchitecture configuration being evaluated. In this paper, we quantify the effect of these three parameters and their interactions on the accuracy of the performance estimate and on the simulation cost. We use the Confidence Interval of estimated Mean (CIM), a metric derived from statistical sampling theory, to measure the accuracy of the performance estimate; we also discuss why CIM is an appropriate metric for this analysis. We use the total number of instructions simulated and the total number of samples measured as cost parameters. Finally, we characterize 21 SPEC CPU2000 benchmarks based on our analysis.
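To make the sampling idea concrete, the following is a minimal sketch (not taken from the paper) of a confidence interval for a stratified-sampling performance estimate, using the textbook stratified estimator of the mean with a finite-population correction. The strata sizes, CPI values, and the function name stratified_ci are all hypothetical; the paper's exact CIM definition may differ in detail.

```python
import math
import random

def stratified_ci(strata, z=1.96):
    """Estimate a population mean (e.g., CPI over all intervals) from
    stratified random samples and return the z-level confidence
    interval half-width of that estimate.

    strata: list of (N_h, samples_h) pairs, where N_h is the total
            number of intervals in stratum h (a program phase) and
            samples_h holds the values measured for that stratum.
    """
    N = sum(N_h for N_h, _ in strata)
    mean = 0.0
    var = 0.0
    for N_h, samples in strata:
        n_h = len(samples)
        W_h = N_h / N                       # stratum weight
        y_h = sum(samples) / n_h            # stratum sample mean
        s2_h = sum((y - y_h) ** 2 for y in samples) / (n_h - 1)
        mean += W_h * y_h
        # Variance of the stratified estimator, with the
        # finite-population correction (1 - n_h / N_h).
        var += (W_h ** 2) * (s2_h / n_h) * (1.0 - n_h / N_h)
    return mean, z * math.sqrt(var)

# Hypothetical example: three phases with different CPI behavior.
random.seed(0)
phases = [
    (4000, [random.gauss(1.2, 0.05) for _ in range(30)]),  # stable phase
    (2500, [random.gauss(2.1, 0.30) for _ in range(30)]),  # memory-bound phase
    (1500, [random.gauss(0.9, 0.10) for _ in range(30)]),  # compute-bound phase
]
cpi, hw = stratified_ci(phases)
print(f"estimated CPI = {cpi:.3f} +/- {hw:.3f} (95% CI)")
```

Note how the half-width shrinks as more samples are drawn per stratum, and how splitting the program into more homogeneous phases reduces the within-stratum variances s2_h, which is the mechanism the three parameters above trade off against simulation cost.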
