InfiniBand networking saves energy

If you are not familiar with the InfiniBand standard, think of it as a way to save energy.

InfiniBand™ is an industry-standard specification that defines an input/output architecture used to interconnect servers, communications infrastructure equipment, storage, and embedded systems. InfiniBand is a true fabric architecture built on switched, point-to-point channels, with data transfers today at up to 120 gigabits per second, both across chassis backplanes and over external copper and optical fiber connections.

InfiniBand™ has a robust roadmap defining increasing speeds through 2011, and 40 Gb/s InfiniBand™ products are shipping today. The roadmap projects growing market demand for InfiniBand™ 1x EDR, 4x EDR, 8x EDR and 12x EDR beyond 2011, which translates to bandwidths nearing 1,000 Gb/s within the next three years.
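
Where do headline numbers like 40 Gb/s and 120 Gb/s come from? They are simply the per-lane signaling rate (SDR/DDR/QDR, and the projected EDR) multiplied by the link width (1x/4x/12x). Here is a quick back-of-the-envelope sketch in C; the per-lane rates are the commonly quoted marketing figures, and the EDR number is a roadmap projection rather than a shipping product:

    /* Back-of-the-envelope InfiniBand link speeds:
     * per-lane signaling rate (SDR/DDR/QDR, projected EDR) x link width (1x/4x/12x). */
    #include <stdio.h>

    int main(void)
    {
        const char  *gen[]   = { "SDR", "DDR", "QDR", "EDR (projected)" };
        const double lane[]  = { 2.5, 5.0, 10.0, 25.0 };  /* Gb/s per lane, signaling rate */
        const int    width[] = { 1, 4, 12 };

        for (int g = 0; g < 4; g++)
            for (int w = 0; w < 3; w++)
                printf("%-16s x%-2d = %6.1f Gb/s\n",
                       gen[g], width[w], lane[g] * width[w]);
        /* 4x QDR  =  40 Gb/s  -- the "shipping today" figure
         * 12x QDR = 120 Gb/s  -- the "up to 120 Gb/s" figure above */
        return 0;
    }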

InfiniBand is a pervasive, low-latency, high-bandwidth interconnect with low processing overhead, ideal for carrying multiple traffic types (clustering, communications, storage, management) over a single connection. As a mature and field-proven technology, InfiniBand is used in thousands of data centers, high-performance compute clusters and embedded applications, scaling from two nodes up to single clusters that interconnect thousands of nodes.
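
If you want to poke at an InfiniBand fabric yourself, the verbs API from libibverbs is the usual entry point. Below is a minimal sketch, assuming libibverbs is installed, that simply enumerates adapters and reports each port's state, width and speed (it does nothing useful on a host without an HCA):

    /* Minimal libibverbs sketch: list InfiniBand devices and port attributes.
     * Build with: gcc list_ib.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr attr;
            if (ibv_query_device(ctx, &attr) == 0) {
                printf("%s: %u port(s)\n",
                       ibv_get_device_name(devs[i]), (unsigned)attr.phys_port_cnt);

                for (uint8_t p = 1; p <= attr.phys_port_cnt; p++) {
                    struct ibv_port_attr port;
                    if (ibv_query_port(ctx, p, &port) == 0)
                        printf("  port %u: state %d, width code %u, speed code %u\n",
                               (unsigned)p, port.state,
                               (unsigned)port.active_width, (unsigned)port.active_speed);
                }
            }
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }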

How good is InfiniBand? It is used in 18 of the top 20 green supercomputers.

The latest TOP500 list showed InfiniBand rising, connecting 182 systems (36 percent of the list) and clearly dominating the TOP10 through TOP300 systems. Half of the TOP10 systems are connected via InfiniBand, and although the new #1 system (Jaguar at ORNL) is a Cray, it’s worth noting that InfiniBand serves as the storage interconnect linking Jaguar to its “Spider” storage system.

But let’s talk efficiency for a moment… this edition of the TOP500 showed that 18 of the 20 most efficient systems on the list used InfiniBand, and that InfiniBand system efficiency reached as high as 96 percent! That’s over 50 percent more efficient than the best GigE cluster. Bottom line: WHEN PURCHASING NEW SYSTEMS, DON’T IGNORE THE NETWORK! You may save pennies on the network only to waste dollars on processors starved by an unbalanced architecture.
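
For the arithmetic-minded, TOP500 efficiency is simply Rmax divided by Rpeak (achieved versus theoretical Linpack performance). Here is a small sketch of what that gap means on identical hardware; the 63 percent GigE figure is an assumption for illustration, roughly what GigE clusters delivered on Linpack at the time:

    /* Sketch: what an interconnect-driven efficiency gap costs on identical hardware.
     * The 0.63 GigE efficiency is an illustrative assumption, not from the article. */
    #include <stdio.h>

    int main(void)
    {
        const double rpeak    = 100.0;  /* hypothetical peak, Tflop/s */
        const double ib_eff   = 0.96;   /* best InfiniBand efficiency cited above */
        const double gige_eff = 0.63;   /* assumed "best GigE" Linpack efficiency */

        printf("InfiniBand: Rmax = %5.1f Tflop/s\n", rpeak * ib_eff);
        printf("GigE:       Rmax = %5.1f Tflop/s\n", rpeak * gige_eff);
        printf("Delivered-performance advantage: %.0f%%\n",
               (ib_eff / gige_eff - 1.0) * 100.0);
        return 0;
    }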

If you don’t run a supercomputer, virtualized I/O is another area where InfiniBand’s efficiency pays off.

The StorageMojo take
Good to see Iband used as a big cheap pipe. Its low latency, cheap switch ports and high bandwidth make it the best choice for virtualized I/O.

VMware and Hyper-V have serious I/O problems. Xsigo helps manage them.