Virtualization and consolidation are powerful enablers for cloud computing and energy efficiency. In this project, we develop techniques to quantify and manage application performance in consolidated settings.
Consolidating workloads is an effective way to save energy: it improves resource utilization and amortizes the idle power cost of keeping servers active over a greater amount of useful work. However, consolidation also increases resource contention, and beyond a certain point the added contention degrades throughput so much that consolidation yields no benefit. The figure below shows the energy per transaction for an example transaction-processing application. As more instances of the same application are consolidated on the server, driving processor and disk utilization higher, energy use first falls and then rises again. The exact tipping point of course depends on the server hardware used and the characteristics of the software application.
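The tipping-point behavior can be sketched with a toy model. This is purely illustrative, not the project's measured data: it assumes a fixed idle power, a per-instance active power, and a contention penalty that reduces per-instance throughput once the number of instances passes a hypothetical knee.

```python
# Illustrative model of energy per transaction for n identical
# application instances consolidated on one server. All parameters
# (idle_w, active_w, solo_tps, knee, penalty) are hypothetical.

def energy_per_transaction(n, idle_w=100.0, active_w=20.0,
                           solo_tps=50.0, knee=4, penalty=0.35):
    """Joules per transaction with n consolidated instances."""
    power = idle_w + active_w * n          # total server power draw (W)
    # Beyond the knee, each extra instance slows every instance down,
    # modeling contention on shared resources.
    slowdown = 1.0 + penalty * max(0, n - knee)
    throughput = n * solo_tps / slowdown   # aggregate transactions/sec
    return power / throughput              # energy per transaction (J)

curve = [energy_per_transaction(n) for n in range(1, 9)]
best_n = min(range(1, 9), key=energy_per_transaction)
```

Under these made-up parameters the curve is U-shaped: energy per transaction drops while idle power is being amortized, reaches a minimum, and rises again once contention dominates, which mirrors the qualitative behavior described above.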
Virtualization provides a powerful mechanism to guard against such contention by strictly partitioning resources among consolidated workloads. However, current virtualization technologies do not partition certain hardware-managed shared resources, such as processor caches and memory bandwidth. The figure below shows the performance degradation of several sample benchmark applications when consolidated with virtualization and assigned dedicated processor cores. Each application runs in a VM allocated its own processor core, reserved memory, and disk space. Degradation is measured relative to the same application running on exactly the same resources but with the other processor cores left idle.
Our research develops methods to manage resource contention and performance degradation. Our methods predict the degradation that will be observed after applications are consolidated, enabling intelligent decisions about which sets of applications to place together. We develop methods that select workload placements to strike the desired performance and energy trade-offs. In addition to selecting performance-aware placements, we also proactively allocate additional resources to selected applications at run time to compensate for unavoidable degradation. The specific methods used are discussed in the publications listed below and are also part of our ongoing work.
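The placement-selection idea above can be sketched in a few lines. This is a hedged illustration, not the published algorithm: it assumes a hypothetical pairwise-degradation table (the fractional slowdown each application suffers when co-located with another) and exhaustively picks the pairing of four applications onto two servers that minimizes the worst-case degradation.

```python
# Hypothetical example: place apps A-D onto two 2-VM servers so that
# the worst predicted degradation across all apps is minimized.
# The degradation values are made up for illustration.

apps = ["A", "B", "C", "D"]

# degradation[(x, y)]: predicted slowdown when x and y share a server
degradation = {
    ("A", "B"): 0.30, ("A", "C"): 0.05, ("A", "D"): 0.10,
    ("B", "C"): 0.08, ("B", "D"): 0.25, ("C", "D"): 0.12,
}

def pair_cost(x, y):
    return degradation[tuple(sorted((x, y)))]

def best_pairing(apps):
    """Enumerate all pairings of four apps onto two servers and return
    the placement with the lowest maximum predicted degradation."""
    best, best_cost = None, float("inf")
    first = apps[0]
    for partner in apps[1:]:
        rest = [a for a in apps[1:] if a != partner]
        cost = max(pair_cost(first, partner), pair_cost(*rest))
        if cost < best_cost:
            best, best_cost = ((first, partner), tuple(rest)), cost
    return best, best_cost

placement, cost = best_pairing(apps)
```

Exhaustive search is only feasible at toy scale; the cited PACMan work treats the general problem with approximation algorithms, since the number of possible placements grows combinatorially.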
- Alan Roytman, Aman Kansal, Sriram Govindan, Jie Liu, and Suman Nath, PACMan: Performance Aware Virtual Machine Consolidation, in 10th International Conference on Autonomic Computing (ICAC), USENIX, 26 June 2013
- Alan Roytman, Aman Kansal, Sriram Govindan, Jie Liu, and Suman Nath, Algorithm Design for Performance Aware VM Consolidation, no. MSR-TR-2013-28, 4 March 2013
- Sriram Govindan, Jie Liu, Aman Kansal, and Anand Sivasubramaniam, Cuanta: Quantifying Effects of Shared On-chip Resource Interference for Consolidated Virtual Machines, in ACM Symposium on Cloud Computing (SOCC), ACM, 27 October 2011
- Sriram Govindan, Jie Liu, Aman Kansal, and Anand Sivasubramaniam, Cuanta: Quantifying Effects of Shared On-chip Resource Interference for Consolidated Virtual Machines, no. MSR-TR-2011-55, May 2011
- Ripal Nathuji, Aman Kansal, and Alireza Ghaffarkhah, Q-Clouds: Managing Performance Interference Effects for QoS-Aware Clouds, in Eurosys 2010, Association for Computing Machinery, Inc., 13 April 2010
- Shekhar Srikantaiah, Aman Kansal, and Feng Zhao, Energy Aware Consolidation for Cloud Computing, in USENIX HotPower'08: Workshop on Power Aware Computing and Systems at OSDI, USENIX, 7 December 2008