Abstract
Computer systems increasingly rely on dynamic, phase-based system management techniques, in which system hardware and software parameters may be altered or tuned at runtime for different program phases. Prior research has considered a range of possible phase analysis techniques, but has focused almost exclusively on performance-oriented phases; the notion of power-oriented phases has not been explored. Moreover, the bulk of phase-analysis studies has relied on simulation-based evaluation. There is a need for real-system experiments that directly compare practical techniques (such as control-flow sampling, event counters, and power measurements) for gauging phase behavior. In this paper, we propose and evaluate a live, real-system measurement framework for collecting and analyzing power phases in running applications. Our experimental framework simultaneously collects control-flow, performance-counter, and live power-measurement information. Using this framework, we directly compare code-oriented techniques (such as "basic block vectors") with performance-counter techniques for characterizing power phases. Across a collection of SPEC2000 benchmarks and mainstream desktop applications, our results indicate that both techniques are promising, but that performance counters consistently provide a better representation of power behavior. In many of the cases we examined, basic block vectors reveal a strong relationship between execution path and power consumption. However, there are instances where power behavior cannot be captured from control flow alone, for example due to differences in memory-hierarchy performance; we demonstrate such cases with examples from real applications. Overall, counter-based techniques achieve average classification errors of 1.9% for SPEC and 7.1% for the other benchmarks, while basic block vectors achieve average errors of 2.9% for SPEC and 11.7% for the other benchmarks.
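To make the basic-block-vector approach concrete, the following is a minimal, illustrative sketch (not taken from the paper) of similarity-based phase classification: each execution interval is summarized as a normalized vector of per-basic-block execution counts, and intervals whose vectors lie within a Manhattan-distance threshold are grouped into the same phase. The threshold value, the example data, and the function names are hypothetical; the paper's actual classification procedure may differ.

```python
# Illustrative sketch of BBV-based phase classification (assumptions noted above).
import numpy as np

def normalize(bbv):
    """Scale a raw BBV (per-interval basic-block execution counts) to sum to 1."""
    total = bbv.sum()
    return bbv / total if total > 0 else bbv

def manhattan(a, b):
    """Manhattan distance between two normalized BBVs; ranges over [0, 2]."""
    return np.abs(a - b).sum()

def classify_phases(bbvs, threshold=0.25):  # threshold is a hypothetical choice
    """Assign each interval to the first phase whose representative BBV is
    within `threshold`; otherwise start a new phase for it."""
    reps, labels = [], []
    for bbv in map(normalize, bbvs):
        for i, rep in enumerate(reps):
            if manhattan(bbv, rep) < threshold:
                labels.append(i)
                break
        else:  # no existing phase is close enough: create a new one
            reps.append(bbv)
            labels.append(len(reps) - 1)
    return labels

# Example: three intervals over four basic blocks; the first two share a phase.
intervals = [np.array([90, 5, 5, 0]), np.array([88, 6, 6, 0]), np.array([5, 5, 0, 90])]
print(classify_phases(intervals))  # -> [0, 0, 1]
```

A counter-based variant would replace the BBVs with vectors of per-interval performance-counter rates (e.g., instructions or cache misses per cycle) and apply the same similarity test, which is one way to frame the comparison the abstract describes.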