InfiniBand and Ethernet are the leading interconnect solutions for connecting servers and storage systems in high-performance computing and in enterprise (virtualized or not) data centers. Recently, the HPC Advisory Council has put together the most comprehensive database for high-performance computing applications to help users understand the performance, productivity, efficiency and scalability differences between InfiniBand and 10 Gigabit Ethernet.
In summary, a large number of HPC applications need the lowest possible latency or the highest bandwidth for best performance (for example, oil and gas applications as well as weather-related applications). Some HPC applications are not latency sensitive: gene sequencing and some bioinformatics applications, for example, scale well over TCP-based networks, including GigE and 10GigE. For converged HPC networks, putting HPC message-passing traffic and storage traffic on a single TCP network may not provide enough data throughput for either. Finally, there are a number of examples showing that 10GigE has limited scalability for HPC applications and that InfiniBand proves to be a better performance, price/performance, and power solution than 10GigE.
High-performance computing plays an invaluable role in research, product development and education. It helps accelerate time to market, delivers significant cost reductions in product development, and provides tremendous flexibility. One strength of high-performance computing is the ability to achieve the best sustained performance by driving the CPU toward its limits. Over the past decade, high-performance computing has migrated from supercomputers to commodity clusters: more than eighty percent of the world’s Top500 compute system installations in June 2009 were clusters. The driver for this move appears to be a combination of Moore’s Law (enabling higher-performance computers at lower cost) and the drive for the best cost/performance and power/performance. Cluster productivity and flexibility are the most important factors in a cluster’s hardware and software configuration.
A deeper examination of the world’s Top500 systems based on commodity clusters shows two main interconnect solutions being used to connect the servers in these powerful compute systems – InfiniBand and Ethernet. If we divide the systems according to interconnect family, we see that the same CPUs, memory speeds and other settings are common to both groups. The only difference between the two groups, besides the interconnect, is system efficiency – how many CPU cycles can be dedicated to application work, and how many are wasted. The graph below lists the systems according to their interconnect and their measured efficiency.
As seen, systems connected with Ethernet achieve an average of 50% efficiency, meaning that 50% of the CPU cycles are wasted on non-application work or sit idle, waiting for data to arrive. Systems connected with InfiniBand average above 80% efficiency, meaning that less than 20% of the CPU cycles are wasted. Moreover, the latest InfiniBand-based systems have demonstrated up to 94% efficiency (the best Ethernet-connected systems demonstrated 63% efficiency).
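The efficiency figure here is simply the ratio of sustained Linpack performance (Rmax) to theoretical peak (Rpeak). A minimal sketch, using the percentages quoted above as illustrative inputs (the function and variable names are my own, not from any TOP500 tooling):

```python
def linpack_efficiency(rmax_tflops, rpeak_tflops):
    """Fraction of theoretical peak performance actually sustained by Linpack."""
    return rmax_tflops / rpeak_tflops

# Illustrative 100-TFlops-peak systems, using the efficiencies quoted above
ib_eff = linpack_efficiency(94.0, 100.0)   # best InfiniBand-connected system
eth_eff = linpack_efficiency(50.0, 100.0)  # average Ethernet-connected system

print(f"InfiniBand: {ib_eff:.0%} efficient, {1 - ib_eff:.0%} of cycles wasted")
print(f"Ethernet:   {eth_eff:.0%} efficient, {1 - eth_eff:.0%} of cycles wasted")
```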
One might argue that the Linpack benchmark is not the best benchmark for measuring parallel application efficiency, since it does not fully utilize the network. But the graph results are a clear indication that even for Linpack the network makes a difference, and for more communication-intensive parallel applications the gap will be much wider.
When choosing the system configuration, with the notion of maximizing return on investment, one needs to make sure no artificial bottlenecks are created. Multi-core platforms, parallel applications, large databases and the like require fast data exchange, and lots of it. Ethernet can become the system bottleneck due to its latency/bandwidth limits and the CPU overhead of TCP/UDP processing (TOE solutions introduce other issues, sometimes more complicated ones, but that is a topic for another blog), reducing system efficiency to 50%. This means that half of the compute system is wasted and just consumes power and cooling. The same performance capability could have been achieved with roughly half the servers had they been connected with InfiniBand. More data on application performance, productivity and ROI can be found at the HPC Advisory Council web site, under content/best practices.
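To make the server-count argument concrete, here is a back-of-the-envelope sketch (the per-server peak and target figures are illustrative assumptions, not measured data): the number of servers needed to sustain a target performance scales inversely with interconnect-dependent efficiency.

```python
import math

def servers_needed(target_tflops, peak_tflops_per_server, efficiency):
    """Servers required to sustain a target performance at a given efficiency."""
    return math.ceil(target_tflops / (peak_tflops_per_server * efficiency))

# Assumed: 1 TFlops peak per server, 100 TFlops sustained target
eth_servers = servers_needed(100, 1.0, 0.50)  # Ethernet at ~50% efficiency -> 200
ib_servers = servers_needed(100, 1.0, 0.94)   # InfiniBand at ~94% efficiency -> 107
print(f"Ethernet: {eth_servers} servers, InfiniBand: {ib_servers} servers")
```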
While InfiniBand will demonstrate higher efficiency and productivity, there are several ways to increase Ethernet efficiency. One of them is optimizing the transport layer to provide zero copy and lower CPU overhead (not by using TOE solutions, as those introduce single points of failure into the system). This capability is known as LLE (low-latency Ethernet). More on LLE will be discussed in future blogs.
VMworld was everything we expected and more. The traffic was tremendous and we had a lot of excitement and buzz in our booth (especially after we won the Best of VMworld in the Cloud Computing category). Just in case you were unable to sit through one of Mellanox’s presentations, or one from our partners (Xsigo, HP, Intalio, RNA Networks, and the OpenFabrics Alliance), we went ahead and videotaped the sessions and have posted them below.
Mellanox – F.U.E.L. Efficient Virtualized Data Centers
Mellanox – On-Demand Network Services
Intalio – Private Cloud Platform
HP BladeSystem and ExSO SL-Series
Xsigo – How to Unleash vSphere’s Full Potential with Xsigo Virtual I/O
RNA Networks – Virtual Memory
OpenFabrics Alliance – All things Virtual with OpenFabrics and IB
We were very excited to announce today that the Intalio|Cloud Appliance, accelerated by Mellanox 40Gb/s InfiniBand, has received a Best of VMworld 2009 award in the Cloud Computing Technologies category. It’s not too late to see what the fuss is all about. The Intalio|Cloud Appliance is being demonstrated at our booth (#2220) at VMworld in San Francisco. This is the last day to see us!
Together, the companies are working to develop a high-performance replication system for Logicworks’ customers. LINBIT DRBD open source technology combined with the InfiniBand fabric from Mellanox will lower costs and make it possible to achieve unprecedented levels of input/output (I/O) performance, leading to improved cloud-based storage management and disaster recovery capabilities. The adoption of InfiniBand for the cloud-based system will provide Logicworks’ customers with the unparalleled performance that is critical for hosting latency sensitive applications.
“By utilizing both LINBIT’s DRBD technology and Mellanox’s InfiniBand interconnects, Logicworks’ customers will be able to take their cloud-based applications to the next level,” said Bart Grantham, R&D vice president, Logicworks. “We are eager to build on our relationship with LINBIT and excited to be among the first in the industry to offer such a solution to our customers.”
ISC’09 in Hamburg, Germany, went exceptionally well. Below is a quick video of us launching the new IS5000 family of 40Gb/s InfiniBand switches to the attending press and analysts. Afterwards, Gilad Shainer, director of HPC marketing, gives you a tour of the live booth demonstrations of both 40Gb/s InfiniBand and low-latency 10 Gigabit Ethernet.
Started in 1993, the TOP500 lists the fastest computers used today, ranked according to Linpack benchmark results. Published twice a year, the TOP500 list provides an important tool for tracking usage trends in high-performance computing. The 33rd TOP500 List was released in Hamburg, Germany, during the ISC’09 conference.
This year’s list revealed that Mellanox InfiniBand demonstrated up to 94 percent system utilization, only 6 percent under the theoretical limit, providing users with the best return on investment for their high-performance computing server and storage infrastructure. This year’s TOP500 list reveals that InfiniBand is the only growing industry-standard interconnect solution, increasing 25 percent to 152 systems, representing more than 30 percent of the TOP500. Mellanox ConnectX® InfiniBand adapters and switch systems based on its InfiniScale® III and IV switch silicon provide the scalable, low-latency, and power-efficient interconnect for the world’s fastest supercomputer and the majority of the top 100 systems. Mellanox end-to-end 40Gb/s InfiniBand solutions deliver the leading performance and highest Top10 system efficiency in the 10th ranked Jülich cluster.
Highlights of InfiniBand usage on the June 2009 TOP500 list include:
- Mellanox InfiniBand interconnect products connect the world’s fastest supercomputer, 4 of the top 10 most prestigious positions, and 9 of the top 20 systems
- Mellanox InfiniBand provides the highest system utilization, up to 94 percent, which is 50 percent higher than the best GigE-based system
- Mellanox 40Gb/s end-to-end solutions provide the highest system utilization on the top 10, 15% higher than the average top 20 efficiency
- All InfiniBand-based clusters (152 total supercomputers) use Mellanox solutions
- InfiniBand is the most used interconnect among the top 100 supercomputers with 59 systems, nearly 12 times the number of Gigabit Ethernet-based clusters and of proprietary high-speed cluster interconnects
- The total number of InfiniBand-connected CPU cores on the list has grown from 606,000 in June 2008 to 1,040,000 in 2009 (72 percent yearly growth)
- InfiniBand is the only growing industry-standard clustered interconnect in the TOP500 with a 25 percent growth rate compared to June 2008
- Mellanox InfiniBand interconnect products present in the TOP500 are used by a diverse list of applications, from large-scale, high-performance computing to commercial technical computing and enterprise data centers
- The entry level for the TOP500 list is 17 TFlops, 91 percent higher than the 8.9 TFlops necessary to be on the June 2008 list
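The percentages in the list above can be checked with simple arithmetic; a quick sketch (the helper name is mine, the numbers come from the list itself):

```python
def pct_change(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

core_growth = pct_change(1_040_000, 606_000)  # InfiniBand-connected cores, 2008 -> 2009
entry_growth = pct_change(17.0, 8.9)          # TFlops entry level, 2008 -> 2009
util_gap = pct_change(94, 63)                 # best InfiniBand vs. best GigE utilization

print(f"core growth: {core_growth:.0f}%")   # ~72 percent, as quoted
print(f"entry level: {entry_growth:.0f}%")  # ~91 percent, as quoted
print(f"utilization gap: {util_gap:.0f}%")  # ~49 percent, quoted as "50 percent higher"
```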
On Tuesday, May 26, the Research Center Jülich reached a significant milestone of German and European supercomputing with the inauguration of two new supercomputers: the supercomputer JUROPA and the fusion machine HPC-FF. The symbolic start of the systems was triggered by the German Federal Minister for Education and Research, Prof. Dr. Annette Schavan, the Prime Minister of North Rhine-Westphalia, Dr. Jürgen Rüttgers, and Prof. Dr. Achim Bachem, Chairman of the Board of Directors at Research Center Jülich, as well as by high-ranking international guests from academia, industry and politics.
JUROPA (which stands for Juelich Research on Petaflop Architectures) will be used by more than 200 research groups across Europe to run their data-intensive applications. JUROPA is based on a cluster configuration of Sun Blade servers, Intel Nehalem processors, Mellanox 40Gb/s InfiniBand and the ParaStation cluster operation software from ParTec Cluster Competence Center GmbH. The system was jointly developed by experts at the Jülich Supercomputing Center and implemented with partner companies Bull, Sun, Intel, Mellanox and ParTec. It consists of 2,208 compute nodes with a total computing power of 207 Teraflops and was sponsored by the Helmholtz Community. Prof. Dr. Dr. Thomas Lippert, Head of the Jülich Supercomputing Center, explains the HPC installation in Jülich in the video below.
HPC-FF (High Performance Computing – for Fusion), drawn up by the team headed by Dr. Thomas Lippert, director of the Jülich Supercomputing Centre, was optimized and implemented together with the partner companies Bull, Sun, Intel, Mellanox and ParTec. This new best-of-breed system, one of Europe’s most powerful, will support advanced research in many areas such as health, information, environment, and energy. It consists of 1,080 compute nodes, each equipped with two quad-core Nehalem EP processors from Intel. Their total computing power of 101 teraflop/s corresponds, at present, to 30th place on the list of the world’s fastest supercomputers. The combined cluster will achieve 300 teraflop/s of computing power and will be included in the rating of the Top500 list, published this month at ISC’09 in Hamburg, Germany.
40Gb/s InfiniBand from Mellanox is used as the system interconnect. The administrative infrastructure is based on NovaScale R422-E2 servers from French supercomputer manufacturer Bull, who supplied the compute hardware and the Sun ZFS/Lustre filesystem. The cluster operating system “ParaStation V5” is supplied by Munich software company ParTec. HPC-FF is being funded by the European Commission (EURATOM), the member institutes of EFDA, and Forschungszentrum Jülich.
Complete system facts: 3,288 compute nodes; 79 TB main memory; 26,304 cores; 308 Teraflops peak performance
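Per-node figures fall out of those combined system facts directly; a quick derivation sketch (variable names are mine, all inputs are from the two posts above):

```python
# Combined JUROPA + HPC-FF facts from above
nodes = 2208 + 1080            # 3288 compute nodes in total
cores = 26304
mem_tb = 79
peak_tflops = 207 + 101        # 308 TFlops combined peak

cores_per_node = cores // nodes               # 8: two quad-core Nehalem EP CPUs
mem_per_node_gb = mem_tb * 1024 / nodes       # ~24.6 GB of memory per node
gflops_per_core = peak_tflops * 1000 / cores  # ~11.7 GFlops peak per core

print(cores_per_node, round(mem_per_node_gb, 1), round(gflops_per_core, 1))
```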
I’m still pondering my take on Interop this year. It’s been a while since I’ve seen so many abandoned spaces on the show floor. Mind you, most were 10×10 or 10×20 spots, but you could tell there were others who really went light on their presence. I saw one 40×40 booth that was just filled with banner stands. Yikes! So nothing was really grabbing me until I went to Fusion-io’s booth and saw the wall of monitors with 1,000 videos playing on it at once.
FINALLY SOMETHING IMPRESSIVE!
Even more amazing, the videos were all being driven by a single PCIe card carrying 1.2TB of solid-state storage. This one “ioSAN” card from Fusion-io completely saturated 16 servers (126 CPU cores)…and it achieved this through the bandwidth and ultra-low latency of 20Gb/s InfiniBand via Mellanox’s ConnectX adapters. In fact, they told me the 20Gb/s InfiniBand connection would have allowed them to saturate even more servers, yet they only brought 16.
The video below, featuring Fusion-io’s CTO David Flynn, tells the complete story:
The ioSAN can be used as networked, server-attached storage or integrated into networked storage infrastructure, making fundamental changes to the enterprise storage area. This is a great example of how Mellanox InfiniBand is the enabling technology for next generation storage.