Intel has got to be hating this publicity: Microsoft using Intel Atom chips to build servers. I’ve been blogging on the idea of using Intel Atoms for servers, and people laughed at the performance. But if you think about the future of the Intel Atom chip and the rapid growth of netbooks, Intel is under phenomenal pressure to increase performance per watt.
DataCenterKnowledge reports on Microsoft Research’s use of Intel Atom-based servers at TechFest.
Microsoft’s Low-Power Server Prototype
February 24th, 2009 : Rich Miller

How low can your server power go? Microsoft is investigating that question in a project by its new Cloud Computing Futures (CCF) research unit, which aims to reduce data center costs by “four-fold or greater.” The new group was introduced today at the Microsoft TechFest in Redmond. One of CCF’s initial research projects is testing the viability of a small cloud computing server farm using low-power Intel Atom processors originally designed for use in netbooks and mobile applications.
“In addition to requiring far less energy - 5 watts versus 50 to 100 watts for a processor typically used in a data center - low-power processors also have quiescent states that consume little energy and can be awakened quickly,” explained Dan Reed, director of Scalable and Multicore Systems for Cloud Computing Futures. “These states are used in the sleep and hibernate features of laptops and netbooks. With our current Atom processor, its energy consumption when running is 28 to 34 watts, but in the sleep or hibernate state, it consumes 3 to 4 watts, a reduction of 10 times in the energy consumption of idle processors.”
In this brief video, CCF Director of Software Architecture Jim Larus demonstrates a prototype rack packed with these low-power processors:
That wasn’t the only data center project discussed at TechFest.
The article continues with the use of closed-loop feedback for dynamically adjusting the number of servers available.
The Cloud Computing Futures team also discussed Marlowe, a system for selectively putting idle servers into a low-power state. Reed said Marlowe “highlights the power of an intelligent control system that can determine when to put a processor to sleep and when to awaken it to service the workload.”
“This problem has two interesting challenges,” he said. “The first is to estimate how many processors are necessary to handle a given workload by responding to every request in a timely manner. (By analogy, how many checkout clerks should be at the cash registers?) The second is to anticipate the workload in the near future, since it takes 5 to 15 seconds to awaken a processor from sleep and 30 to 45 seconds for hibernate. The system needs to hold some processors in reserve and to anticipate the workload 5 to 45 seconds in the future to ensure that sufficient servers are available.”
The solution was a closed-loop control system. “It works by taking regular measurements of the system, such as CPU utilization, response time, and energy consumption; combining this data with the estimated future workload; then adjusting the number of servers in each power state,” Reed said.
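Reed’s description maps onto a fairly simple control loop: measure, forecast the near-term workload, then wake or sleep servers to match. Below is a minimal sketch of what such a controller might look like. To be clear, this is my own illustration, not Microsoft’s Marlowe code; the per-server capacity, headroom factor, and naive forecast are all assumptions, and only the wake-up latencies come from the article.

```python
import math
from dataclasses import dataclass

@dataclass
class Measurements:
    requests_per_sec: float   # current request arrival rate
    cpu_utilization: float    # average utilization of awake servers (0.0 - 1.0)
    awake_servers: int        # servers currently able to take requests

# Assumed figures: the article only gives wake-up latencies (5-15 s from sleep,
# 30-45 s from hibernate), so the capacity and headroom numbers are made up.
REQUESTS_PER_SERVER = 200.0   # assumed per-node capacity
WAKE_HORIZON_SEC = 45         # look far enough ahead to cover a hibernate wake-up
HEADROOM = 1.2                # hold roughly 20% spare capacity in reserve

def forecast_rps(history, horizon_sec):
    """Naive forecast: assume the next horizon_sec seconds look like the busiest
    recent second. A real controller would use a proper workload model."""
    recent = history[-horizon_sec:]
    return max(recent) if recent else 0.0

def target_awake(m: Measurements, rps_history: list) -> int:
    """One control step: decide how many servers should be awake next interval."""
    expected = max(m.requests_per_sec, forecast_rps(rps_history, WAKE_HORIZON_SEC))
    needed = math.ceil(expected * HEADROOM / REQUESTS_PER_SERVER)
    return max(needed, 1)   # never put the whole pool to sleep

# Example: 1,500 req/s now, with recent history peaking at 1,800 req/s
m = Measurements(requests_per_sec=1500, cpu_utilization=0.6, awake_servers=12)
print(target_awake(m, [1200, 1800, 1500]))   # -> 11; wake or sleep nodes to match
```

The point of holding capacity in reserve and forecasting 45 seconds out is exactly the one Reed makes: a hibernating node can’t help you for 30 to 45 seconds, so the controller has to plan ahead rather than just react.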
But this is not a new idea. The biomedicine department at Cornell’s medical school has been doing this for over two years. Here is my blog entry from a year and a half ago.
This facility is one of the only places I know of that turns off servers when they are not needed. For IT pros, this is the equivalent of turning off the lights when they leave the office this holiday weekend. Think about how many servers will be running over the next four days, Thursday through Sunday, with no load on them. Would anyone notice if they were turned off?
The amazing thing is that the biomedicine department has been turning off the servers in its high-performance compute cluster for the past six months, and users don’t notice a change in service, because the compute nodes are turned off and on in response to the job queue. There aren’t going to be many research scientists submitting jobs on Thanksgiving Day. As each compute job completes and a server sits idle, an automated system turns it off. When new compute resources are required as jobs are submitted on Monday, the machines are turned back on.
To put this in numbers: there are 100 servers in the compute cluster, each of which consumes as much power as six 60-watt light bulbs when busy and drops to three 60-watt light bulbs when idle. So if this weekend they can turn off half the machines, they’ll save one hundred fifty 60-watt light bulbs’ worth of electricity. This project was implemented by Jason Banfelder, Vanessa Borcherding, and Luis Gracia at Cornell Weill Medical University, and this team can tell their parents this holiday weekend that yes, we did turn off the lights in the office when we left. Actually, when they left, the servers were probably at 100% utilization, and as jobs completed and servers went idle, they were turned off.
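For flavor, here is a rough sketch of that queue-driven on/off logic along with the savings arithmetic from the paragraph above. The function names and the one-job-per-node simplification are mine, purely for illustration; Cornell’s actual production system is built on Dell servers and OSIsoft’s PI System, not this code.

```python
# Illustrative sketch of the queue-driven on/off idea described above.
# The names and one-job-per-node assumption are mine, not Cornell's implementation.

BUSY_WATTS = 6 * 60   # a busy node draws roughly six 60-watt bulbs (360 W)
IDLE_WATTS = 3 * 60   # an idle node drops to about three 60-watt bulbs (180 W)

def reconcile(queued_jobs: int, idle_nodes: int, off_nodes: int):
    """One pass of the loop: return (nodes_to_power_on, nodes_to_power_off),
    assuming one job per node for simplicity."""
    shortfall = queued_jobs - idle_nodes
    if shortfall > 0:
        return min(shortfall, off_nodes), 0   # queue is backing up: wake nodes
    if queued_jobs == 0 and idle_nodes > 0:
        return 0, idle_nodes                  # nothing waiting: shut idle nodes down
    return 0, 0

# The savings claimed above: powering off 50 otherwise-idle nodes for the weekend
print(50 * IDLE_WATTS, "watts")   # 9000 W, i.e. one hundred fifty 60-watt bulbs
```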
Cornell built this in production using Dell servers and OSIsoft’s PI System, so don’t think of turning off servers as just a research project.