ABSTRACT

Energy consumption has become the primary cost of running a high-performance computing (HPC) system [1,3-5,9,12,54]. In 2007, for the first time, the annual power and cooling budget matched the annual budget for new servers [31]. Today, the leading petaflop supercomputers consume between 1.4 MW and 7 MW of electrical power, with 3.5 MW on average.∗ At typical utility rates, 3.5 MW translates into roughly 3.5 million US dollars per year in electricity costs. This is why Google located its data center in rural Oregon, taking advantage of the cheap hydroelectric power generated along the Columbia River.
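
As a rough sanity check of that figure, assuming an average electricity price of about \$0.114/kWh (an assumed rate; the text does not state one), the conversion works out as:
\[
3{,}500\ \mathrm{kW} \times 8{,}760\ \mathrm{h/yr} \times \$0.114/\mathrm{kWh} \approx \$3.5\ \text{million per year}.
\]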