The Performance Effects of Power Scaling on Kernel-based Atomic Batch Transactions
The need to balance performance and power is essential to computer system efficiency. Today's server-class systems commonly support autonomous power scaling of processors, memory, and disks. While self-governed processor power scaling (e.g., Intel's Turbo Boost) can improve both performance and efficiency, there is growing evidence that boosting processor power (and speed) can at times actually harm performance. In this paper, we identify clear cases where processor power scaling reduces performance by up to 68% on two I/O-intensive benchmarks. We describe a methodology for isolating the performance effects of power scaling in server-class systems, and we propose a new model that explains the root causes of this performance loss in the Linux kernel. Using the model, we identify the global system locks that cause slowdowns at higher processor power (and speed) and eliminate the potential performance loss (up to 68%) from power scaling for the benchmarks studied. We provide a detailed case study of the effects of power scaling on one type of Linux kernel lock (atomic batch transactions), and we discuss future performance challenges for power-scalable systems.