Yearly Archives: 2009

Interconnect analysis: InfiniBand and 10GigE in High-Performance Computing

InfiniBand and Ethernet are the leading interconnect solutions for connecting servers and storage systems in high-performance computing and in enterprise (virtualized or not) data centers. Recently, the HPC Advisory Council has put together the most comprehensive database for high-performance computing applications to help users understand the performance, productivity, efficiency and scalability differences between InfiniBand and 10 Gigabit Ethernet.

In summary, a large number of HPC applications need the lowest possible latency or the highest bandwidth for best performance (for example, oil and gas applications as well as weather-related applications). Some HPC applications, such as gene sequencing and certain bioinformatics codes, are not latency sensitive and scale well over TCP-based networks, including GigE and 10GigE. For HPC converged networks, putting HPC message-passing traffic and storage traffic on a single TCP network may not provide enough data throughput for either. Finally, there are a number of examples showing that 10GigE has limited scalability for HPC applications and that InfiniBand proves to be a better performance, price/performance, and power solution than 10GigE.

The complete report can be found under the HPC Advisory Council case studies.

40GigE is here!

Today we launched the ConnectX®-2 EN 40G converged network adapter card, the world’s first 40 Gigabit Ethernet adapter solution. ConnectX-2 EN 40G enables data centers to maximize the utilization of the latest multi-core processors, achieve unprecedented Ethernet server and storage connectivity, and advance LAN and SAN unification efforts. Mellanox’s 40 Gigabit Ethernet converged network adapter sets the stage for next-generation data centers by enabling high-bandwidth Ethernet fabrics optimized for efficiency while reducing costs, power, and complexity.

Available today, ConnectX-2 EN 40G supports hardware-based I/O virtualization, including Single Root I/O Virtualization (SR-IOV), and delivers the features needed for a converged network with support for Data Center Bridging (DCB). Mellanox’s 40 Gigabit Ethernet converged network adapter solution simplifies FCoE deployment with T11 Fibre Channel frame encapsulation support and hardware offloads. The single port ConnectX-2 EN 40G adapter comes with one QSFP connector suitable for use with copper or fiber optic cables to provide the highest flexibility to IT managers.

As part of Mellanox’s comprehensive portfolio of 10 Gigabit Ethernet and InfiniBand adapters, ConnectX-2 EN 40G is supported by a full suite of software drivers for Microsoft Windows, Linux distributions, VMware and Citrix XenServer. ConnectX-2 EN 40G supports stateless offload and is fully interoperable with standard TCP/UDP/IP stacks.

ROI through efficiency and utilization

High-performance computing plays an invaluable role in research, product development and education. It helps accelerate time to market and provides significant cost reductions and flexibility in product development. One strength of high-performance computing is the ability to achieve the best sustained performance by driving the CPU towards its limits. Over the past decade, high-performance computing has migrated from supercomputers to commodity clusters. More than eighty percent of the world’s Top500 compute system installations in June 2009 were clusters. The driver for this move appears to be a combination of Moore’s Law (enabling higher-performance computers at lower costs) and the drive for the best cost/performance and power/performance. Cluster productivity and flexibility are the most important factors in a cluster’s hardware and software configuration.

A deeper examination of the world’s Top500 systems based on commodity clusters shows two main interconnect solutions being used to connect the servers in these powerful compute systems – InfiniBand and Ethernet. If we divide the systems according to interconnect family, we see that the same CPUs, memory speeds and other settings are common to both groups. The only difference between the two groups, besides the interconnect, is the system efficiency: how many CPU cycles can be dedicated to application work, and how many are wasted. The graph below lists the systems according to their interconnect and their measured efficiency.

Top500 Interconnect Efficiency

As seen, systems connected with Ethernet achieve an average efficiency of 50%, which means that 50% of the CPU cycles are wasted on non-application work or sit idle waiting for data to arrive. Systems connected with InfiniBand achieve an average efficiency above 80%, which means that less than 20% of the CPU cycles are wasted. Moreover, the latest InfiniBand-based systems have demonstrated up to 94% efficiency (the best Ethernet-connected systems demonstrated 63% efficiency).
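To make the connection between these percentages and the published Top500 numbers explicit: efficiency here is simply the ratio of measured Linpack performance (Rmax) to theoretical peak performance (Rpeak). The short sketch below uses made-up figures, not data from any specific Top500 entry, to show how the "wasted cycles" framing follows directly from that ratio.

```python
# Minimal sketch: Top500 efficiency = Rmax / Rpeak.
# The numbers below are purely illustrative, not from any specific system.

def efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
    """Fraction of the theoretical peak actually delivered on Linpack."""
    return rmax_tflops / rpeak_tflops

# Hypothetical Ethernet-connected cluster: 100 TFlops peak, 50 TFlops measured.
eth_eff = efficiency(50.0, 100.0)   # 0.50 -> half the CPU cycles do useful work

# Hypothetical InfiniBand-connected cluster with the same peak.
ib_eff = efficiency(85.0, 100.0)    # 0.85 -> only 15% of the cycles are wasted

print(f"Ethernet efficiency:   {eth_eff:.0%} (wasted: {1 - eth_eff:.0%})")
print(f"InfiniBand efficiency: {ib_eff:.0%} (wasted: {1 - ib_eff:.0%})")
```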

People might argue that the Linpack benchmark is not the best benchmark for measuring parallel application efficiency, since it does not fully utilize the network. Still, the graph results are a clear indication that even for Linpack the network makes a difference, and for more communication-intensive parallel applications the gap will be much wider.

When choosing the system configuration, with the goal of maximizing return on investment, one needs to make sure no artificial bottlenecks are created. Multi-core platforms, parallel applications, large databases and the like require fast data exchange, and lots of it. Ethernet can become the system bottleneck due to latency, bandwidth and the CPU overhead of TCP/UDP processing (TOE solutions introduce other issues, sometimes more complicated ones, but that is a topic for another blog) and reduce system efficiency to 50%. This means that half of the compute system is wasted and just consumes power and cooling. The same performance capability could have been achieved with roughly half the servers had they been connected with InfiniBand, as the sketch below illustrates. More data on the performance, productivity and ROI of different applications can be found on the HPC Advisory Council web site, under content/best practices.
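To put a rough number on the "half of the servers" argument, here is a back-of-the-envelope sketch. The per-node peak, the target sustained performance and the two efficiency figures (50% for Ethernet, 85% for InfiniBand, in line with the averages above) are assumptions for illustration, not measurements.

```python
# Back-of-the-envelope sketch: nodes needed to reach a target sustained
# performance at a given interconnect efficiency. All figures are
# hypothetical and only meant to illustrate the ROI argument.
import math

def nodes_needed(target_sustained_tflops: float,
                 per_node_peak_tflops: float,
                 efficiency: float) -> int:
    """Smallest node count such that nodes * peak * efficiency >= target."""
    return math.ceil(target_sustained_tflops / (per_node_peak_tflops * efficiency))

TARGET = 100.0     # desired sustained TFlops (assumed)
NODE_PEAK = 1.0    # peak TFlops per node (assumed)

eth_nodes = nodes_needed(TARGET, NODE_PEAK, 0.50)   # 200 nodes at 50% efficiency
ib_nodes = nodes_needed(TARGET, NODE_PEAK, 0.85)    # 118 nodes at 85% efficiency

print(f"Ethernet @ 50% efficiency:   {eth_nodes} nodes")
print(f"InfiniBand @ 85% efficiency: {ib_nodes} nodes")
print(f"Extra nodes to power and cool with Ethernet: {eth_nodes - ib_nodes}")
```

At the 94% efficiency demonstrated by the best InfiniBand systems, the required node count drops to roughly half of the Ethernet case, which is where the "half the servers" figure comes from.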

While InfiniBand will demonstrate higher efficiency and productivity, there are several ways to increase Ethernet efficiency. One of them is optimizing the transport layer to provide zero-copy transfers and lower CPU overhead (not by using TOE solutions, as those introduce single points of failure in the system). This capability is known as LLE (low-latency Ethernet). More on LLE will be discussed in future blogs.

Gilad Shainer, Director of Technical Marketing
gilad@mellanox.com

Thanks for coming to see us at VMworld

VMworld was everything we expected and more. The traffic was tremendous and there was a lot of excitement and buzz at our booth (especially after we won Best of VMworld in the Cloud Computing category). Just in case you were unable to sit through one of Mellanox’s presentations, or one from our partners (Xsigo, HP, Intalio, RNA Networks, and the OpenFabrics Alliance), we went ahead and videotaped the sessions and have posted them below.

Mellanox – F.U.E.L. Efficient Virtualized Data Centers
Mellanox – On-Demand Network Services
Intalio – Private Cloud Platform
HP – BladeSystem and ExSO SL-Series
Xsigo – How to Unleash vSphere’s Full Potential with Xsigo Virtual I/O
RNA Networks – Virtual Memory
OpenFabrics Alliance – All things Virtual with OpenFabrics and IB

Winning Gold at VMworld

We were very excited to announce today that the Intalio|Cloud Appliance, accelerated by Mellanox 40Gb/s InfiniBand, has received a Best of VMworld 2009 award in the Cloud Computing Technologies category. It’s not too late to see what the fuss is all about. The Intalio|Cloud Appliance is being demonstrated at our booth (#2220) at VMworld in San Francisco. This is the last day to see us!

Visit Mellanox at VMworld

Are you planning to come to San Francisco and attend VMworld (August 31 – September 3)? Come see Mellanox Technologies at booth #2220, where we will be showcasing our industry-leading, end-to-end 40Gb/s and 10 Gigabit Ethernet connectivity products.

Mellanox will also be hosting a live demonstration of Intalio’s Intalio|Cloud Appliance, a single rack containing all the hardware and software required for building a true enterprise-class private cloud computing platform. The hardware is built from standard components: HP BladeSystem blade servers and enclosures, Solid State Drives (SSDs) for all database storage, and Mellanox InfiniBand interconnect technology to connect it all together. Using InfiniBand, the Intalio|Cloud Appliance benefits from a unified networking fabric, removing the need to deploy and manage multiple networking technologies such as Ethernet and Fibre Channel, thereby reducing networking hardware acquisition costs by up to 50% and network management costs by up to 30%, while boosting performance.

Have a moment for a 10-minute presentation? We have a full day scheduled with a variety of presentations from leading and innovative companies such as HP, Xsigo, Intalio, RNA Networks, and the OpenFabrics Alliance, as well as ongoing presentations from Mellanox on several new products that will be announced during the conference.

Breaking the Cloud “I/O Barrier”

Mellanox and LINBIT just announced a collaboration with Logicworks.

Together, the companies are working to develop a high-performance replication system for Logicworks’ customers. LINBIT’s open-source DRBD technology, combined with Mellanox InfiniBand fabric, will lower costs and make it possible to achieve unprecedented levels of input/output (I/O) performance, leading to improved cloud-based storage management and disaster recovery capabilities. The adoption of InfiniBand for the cloud-based system will provide Logicworks’ customers with the unparalleled performance that is critical for hosting latency-sensitive applications.

“By utilizing both LINBIT’s DRBD technology and Mellanox’s InfiniBand interconnects, Logicworks’ customers will be able to take their cloud-based applications to the next level,” said Bart Grantham, R&D vice president, Logicworks. “We are eager to build on our relationship with LINBIT and excited to be among the first in the industry to offer such a solution to our customers.”