I just posted on the idea that the biggest opportunity to green the data center is procurement. One day later, AnandTech has a post on testing the energy efficiency of servers. If you want the version of this article that is all on one page, go to this print version, as there are 12 pages.
Testing the latest x86 rack servers and low power server CPUs
Date: July 22nd, 2009
Topic: IT Computing
Manufacturer: Various
Author: Johan De Gelas

The x86 rack server space is very crowded, but it is still possible to rise above the crowd. Quite a few data centers have many "gaping holes" in the racks because they have exceeded the power or cooling capacity of the data center and can no longer add servers. One way to distinguish your server from the masses is to create a very low power server. The x86 rack server market is also very cost sensitive, so any innovation that seriously cuts the costs of buying and managing a server will draw some attention. This low power, cost sensitive part of the market does not get nearly the attention it deserves compared to the high performance servers, but it is a huge market. According to AMD, sales of their low power (HE and EE) Opterons account for up to 25% of their total server CPU sales, while the performance oriented SE parts amount to 5% or less. Granted, AMD's presence in the performance oriented market is not that strong right now, but it is a fact that low power servers are getting more popular by the day.
The low power market is very diverse. The people in the "cloudy" data centers are - with good reason - completely power obsessed, as increasing the size of a data center is a very costly affair, to be avoided at all costs. These people tend to almost automatically buy servers with low power CPUs. Then there is the large group of people, probably working in small and medium enterprises (SMEs), who know they have many applications where performance is not the first priority. These people want to fill their hired rack space without paying a premium to the hosting provider for extra current. It used to be rather simple: give heavy applications the (high performance) server they need and go for the simplest, smallest, cheapest, and lowest power server for applications that peak at 15% CPU, like file servers and domain controllers. Virtualization made the server choices a lot more interesting: more performance per server does not necessarily go to waste; it can result in having to buy fewer servers, so prepare to face some interesting choices.
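To make that consolidation argument concrete, here is a rough back-of-the-envelope sketch in Python. The utilization ceiling, the per-server peak figure, and the capacity ratio of the new host are hypothetical illustrations, not numbers taken from the article.

```python
import math

def hosts_needed(n_workloads, avg_peak_util=0.15, host_capacity_ratio=4.0,
                 target_host_util=0.60):
    """Estimate how many virtualization hosts can replace n_workloads old servers.

    avg_peak_util       -- peak CPU utilization of each old server (e.g. 0.15)
    host_capacity_ratio -- how much more compute one new host has than one old server
    target_host_util    -- utilization ceiling we allow on the new host
    """
    usable_capacity = host_capacity_ratio * target_host_util  # in "old server" units
    total_demand = n_workloads * avg_peak_util                # same units
    return math.ceil(total_demand / usable_capacity)

# Example: twenty file servers / domain controllers peaking at 15% CPU
# would, by this crude estimate, consolidate onto two faster hosts.
print(hosts_needed(20))  # -> 2
```

The point of the toy model is simply that extra performance per server is not wasted once workloads are virtualized; it translates directly into fewer boxes to buy and power.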
This article is quite long, which is another reason procurement professionals would not read and digest it; besides, few have the engineering skills to follow it. The writers did not even have the BDM for server purchasing in mind for this article, assuming the server admin and the CIO are the audience.
Does that mean this article is only for server administrators and CIOs? Well, we feel that the hardware enthusiasts will find some interesting info too. We will test seven different CPUs, so this article will complement our six-core Opteron "Istanbul" and quad-core Xeon "Nehalem" reviews. How do lower end Intel "Nehalem" Xeons compare with the high end quad-core Opterons? What's the difference between a lower clocked six-core and a highly clocked quad-core? How much processing power do you have to trade when moving from a 95W TDP Xeon to a 60W TDP chip? What happens when moving from a 75W ACP (105W TDP) six-core Opteron to a 40W ACP (55W TDP) quad-core Opteron? These questions are not the ultimate goal of this article, but it should shed some light on these topics for the interested.
How many procurement people would understand this?
The Supermicro Twin2
This is the most innovative server of this review. Supermicro places four servers in a 2U chassis and feeds them with two redundant 1200W PSUs. The engineers at Supermicro have thus been able to combine very high density with redundancy - no easy feat. Older Twin servers were only attractive to the HPC world, where computing density and affordable prices were the primary criteria. Thanks to the PSU redundancy, the Twin2 should provide better serviceability and appeal to people looking for a web, database, or virtualization server.
Most versions of this chassis support hot swappable server nodes, which makes the Twin2 a sort of mini-blade. Sure, you don't have the integrated networking and KVM of a blade, but on the flip side this thing does not come with TCO-increasing yearly software licenses and the obligatory expensive support contracts.
By powering four nodes with a 1+1 PSU, Supermicro is able to offer redundancy and at the same time can make sure that the PSU always runs at a decent load, thus providing better efficiency. According to Supermicro, the 1200W power supplies can reach up to 93% efficiency. This is confirmed by the fact that the power supply is certified by the Electric Power Research Institute as an "80+ Gold" PSU with 92.4% efficiency at 50% load and 91.2% at 20% load. With four nodes powered, it is very likely that the PSU will normally run between these percentages. Power consumption is further reduced by using only four giant 80mm fans. Unfortunately, and this is a real oversight by Supermicro, the fans are not easy to unplug and replace. We want hot swappable fans.
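To see why four nodes keep a 1200W supply in its efficiency sweet spot, here is a minimal sketch. The per-node draw figures and the assumption that both supplies share the load equally are ours; only the 20%/50% efficiency points come from the 80+ Gold numbers quoted above.

```python
PSU_CAPACITY_W = 1200.0

def psu_load_fraction(node_draw_w, n_nodes=4, sharing_psus=2):
    """Fraction of one PSU's capacity in use, assuming equal load sharing."""
    total_w = node_draw_w * n_nodes
    per_psu_w = total_w / sharing_psus
    return per_psu_w / PSU_CAPACITY_W

# Hypothetical per-node draws of 150 W to 250 W
for draw_w in (150, 200, 250):
    load = psu_load_fraction(draw_w)
    print(f"{draw_w} W/node -> {load:.0%} of one 1200 W PSU")
# Output: 25%, 33%, 42% -- between the quoted 20% (91.2% efficient) and
# 50% (92.4% efficient) load points, i.e. the PSU's sweet spot.
```

A single node behind the same 1+1 pair would leave each supply far below 20% load, which is exactly the low-efficiency region the four-node design avoids.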
Supermicro managed to squeeze two CPUs and 12 DIMM slots onto the tiny boards, which means that you can outfit each node with 48GB of relatively cheap 4GB DIMMs. Another board version has a Mellanox InfiniBand controller and connector onboard, and both QDR and DDR InfiniBand are available. To top it off, Supermicro has chosen the Matrox G200W as a 2D card, which is good for those who still access their servers directly via KVM. Supermicro did make a few compromises: you cannot use Xeons with a TDP higher than 95W (who needs those 130W monsters anyway?), 8GB DIMMs seem to be supported only on a few SKUs right now, and there is only one low profile PCI-e x16 expansion slot.
The Twin2 chassis can be outfitted with boards that support Intel "Nehalem Xeons" as well as AMD "Istanbul Opterons". The "Istanbul version" came out while we were typing this and was thus not included in this review.
Power measurement used this gear:
Power was measured at the wall by two devices: the Extech 38081…
…and the Ingrasys iPoman II 1201. A big thanks to Roel De Frene of Triple S for letting us test this unit.
The Extech device allows us to track power consumption every second; the iPoman logs only once per minute. With the Supermicro Twin2 we wanted to measure four servers simultaneously, and the iPoman device was handy for measuring the power consumption of several server nodes at once. We double-checked the power consumption readings with the Extech 38081.
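As an aside on how such logs are typically turned into figures, here is a small sketch that averages wattage samples and converts them to energy. The sample data is made up; the one-second interval mirrors the Extech's logging rate, and a 60-second interval would mirror the iPoman's.

```python
def average_power_and_energy(samples_w, interval_s=1.0):
    """Return (average watts, kWh) for evenly spaced power samples."""
    avg_w = sum(samples_w) / len(samples_w)
    duration_s = len(samples_w) * interval_s
    kwh = avg_w * duration_s / 3_600_000  # watt-seconds -> kWh
    return avg_w, kwh

# Hypothetical hour of one-second samples hovering around 400 W
samples = [400.0] * 3600
avg_w, kwh = average_power_and_energy(samples)
print(f"average {avg_w:.0f} W over one hour = {kwh:.2f} kWh")  # -> 0.40 kWh
```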