Category Archives: InfiniBand

InfiniBand Leads the Russian Top50 Supercomputers List; Connects 74 Percent, Including Seven of the Top10 Supercomputers

Announced last week, the Russian TOP50 list ranks the fastest computers in Russia according to Linpack benchmark results. The list provides an important tool for tracking usage trends in high-performance computing in Russia.

Mellanox 40Gb/s InfiniBand adapters and switches enable the fastest supercomputer on the Russian Top50 list, which delivers a peak performance of 414 teraflops. More importantly, InfiniBand clearly dominates the list as the most used interconnect solution, connecting 37 systems, including the top three systems and seven of the Top10. With up to 92 percent efficiency as measured by the Linpack benchmark, InfiniBand’s high system efficiency and utilization allow users to maximize the return on investment for their high-performance computing server and storage infrastructure. Nearly three quarters of the list, represented by leading research laboratories, universities, industrial companies and banks in Russia, rely on industry-leading InfiniBand solutions to deliver the highest bandwidth, efficiency, scalability, and application performance.

Highlights of InfiniBand usage on the March 2010 Russia TOP50 list include:

  • Mellanox InfiniBand connects 74 percent of the Top50 list, including seven of the Top10 most prestigious positions (#1, #2, #3, #6, #8, #9 and #10)
  • Mellanox InfiniBand provides world-leading system utilization, up to 92 percent efficiency as measured by the Linpack benchmark
  • The list showed a sharp increase in aggregate performance – the total peak performance of the list exceeded 1 PFlops, reaching 1152.9 TFlops, an increase of 120 percent compared to the September 2009 list – highlighting the increasing demand for higher performance
  • Ethernet connects only 14 percent of the list (seven systems), and there were no 10GigE clusters
  • Proprietary clustering interconnects declined 40 percent to connect only three systems on the list

I look forward to seeing the Top500 results in June at the International Supercomputing Conference. I will be attending the conference and hope to see all of our HPC friends in Germany.

Brian Sparks
Sr. Director of Marketing Communications

Partners HealthCare Cuts Latency of Cloud-based Storage Solution Using Mellanox InfiniBand Technology

An interesting article just came out from Dave Raffo at SearchStorage.com. I have a quick summary below, but you should certainly read the full article here: “Health care system rolls its own data storage ‘cloud’ for researchers.”

Partners HealthCare, a non-profit organization founded in 1994 by Brigham and Women’s Hospital and Massachusetts General Hospital, is an integrated health care system that offers patients a continuum of coordinated high-quality care.

Over the past few years, ever-increasing advances in the resolution and accuracy of medical devices and instrumentation technologies have led to an explosion of data in biomedical research. Partners recognized early on that a cloud-based research compute and storage infrastructure could be a compelling alternative for its researchers. Not only would it enable the organization to distribute costs and provide storage services on demand, it would also save the IT management time that was being spent fixing the independent research computers distributed across the Partners network.

Initially, Partners HealthCare chose Ethernet as the network transport technology. As demand grew, the solution began hitting significant performance bottlenecks, particularly during reads and writes of hundreds of thousands of small files. The issue was found to lie with the interconnect: Ethernet created problems due to its inherently high latency. In order to provide a scalable, low-latency solution, Partners HealthCare turned to InfiniBand. With InfiniBand on the storage back end, Partners experienced roughly two orders of magnitude faster read times. “One user had over 1,000 files, but only took up 100 gigs or so,” said Brent Richter, corporate manager for enterprise research infrastructure and services, Partners HealthCare System. “Doing that with Ethernet would take about 40 minutes just to list that directory. With InfiniBand, we reduced that to about a minute.”

Also, Partners chose InfiniBand over 10-Gigabit Ethernet because InfiniBand is a lower latency protocol. “InfiniBand was price competitive and has lower latency than 10-Gig Ethernet,” he said.

Richter said the final price tag came to about $1 per gigabyte.

By integrating Mellanox InfiniBand into the storage solution, Partners HealthCare was able to cut latency dramatically and increase performance, providing its users with faster response times and higher capacity.

Till next time,

Brian Sparks

Sr. Director, Marketing Communications

Interconnect analysis: InfiniBand and 10GigE in High-Performance Computing

InfiniBand and Ethernet are the leading interconnect solutions for connecting servers and storage systems in high-performance computing and in enterprise (virtualized or not) data centers. Recently, the HPC Advisory Council has put together the most comprehensive database for high-performance computing applications to help users understand the performance, productivity, efficiency and scalability differences between InfiniBand and 10 Gigabit Ethernet.

In summary, a large number of HPC applications need the lowest possible latency or the highest bandwidth for best performance (for example, oil and gas as well as weather-related applications). Some HPC applications are not latency sensitive: gene sequencing and some bioinformatics applications, for example, scale well with TCP-based networks, including GigE and 10GigE. For HPC converged networks, putting HPC message-passing traffic and storage traffic on a single TCP network may not provide enough data throughput for either. Finally, there are a number of examples showing that 10GigE has limited scalability for HPC applications and that InfiniBand proves to be a better performance, price/performance, and power solution than 10GigE.

The complete report can be found under the HPC Advisory Council case studies or by clicking here.

ROI through efficiency and utilization

High-performance computing plays an invaluable role in research, product development and education. It helps accelerate time to market, provides significant cost reductions in product development, and offers tremendous flexibility. One strength of high-performance computing is the ability to achieve the best sustained performance by driving CPU performance towards its limits. Over the past decade, high-performance computing has migrated from supercomputers to commodity clusters: more than eighty percent of the world’s Top500 compute system installations in June 2009 were clusters. The driver for this move appears to be a combination of Moore’s Law (enabling higher performance computers at lower costs) and the drive for the best cost/performance and power/performance. Cluster productivity and flexibility are the most important factors for a cluster’s hardware and software configuration.

A deeper examination of the world’s Top500 systems based on commodity clusters shows two main interconnect solutions being used to connect the servers in these powerful compute systems – InfiniBand and Ethernet. If we divide the systems according to interconnect family, we see that the same CPUs, memory speeds and other settings are common to both groups. The only difference between the two groups, besides the interconnect, is system efficiency – that is, how many CPU cycles can be dedicated to application work and how many are wasted. The graph below lists the systems according to their interconnect and their measured efficiency.

Top500 Interconnect Efficiency

As seen, systems connected with Ethernet achieve an average of 50% efficiency, which means that 50% of the CPU cycles are wasted on non-application work or sit idle waiting for data to arrive. Systems connected with InfiniBand achieve an average efficiency above 80%, which means that less than 20% of the CPU cycles are wasted. Moreover, the latest InfiniBand-based systems have demonstrated up to 94% efficiency (the best Ethernet-connected systems demonstrated 63% efficiency).
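To make the efficiency figures concrete, here is a minimal sketch in Python of how Linpack efficiency is derived: it is simply the measured sustained performance (Rmax) divided by the theoretical peak (Rpeak). The function name and the sample values are illustrative assumptions, not figures for any specific system.

# Minimal sketch: Linpack efficiency is the measured performance (Rmax)
# divided by the theoretical peak performance (Rpeak). Numbers are illustrative.

def linpack_efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
    """Return sustained Linpack efficiency as a percentage of theoretical peak."""
    return 100.0 * rmax_tflops / rpeak_tflops

# Two hypothetical clusters, each with a 100 TFlops theoretical peak:
print(linpack_efficiency(82.0, 100.0))  # ~82.0, in line with the InfiniBand average
print(linpack_efficiency(50.0, 100.0))  # ~50.0, in line with the Ethernet average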

One might argue that the Linpack benchmark is not the best benchmark for measuring parallel application efficiency, since it does not fully utilize the network. Even so, the graph results are a clear indication that the network makes a difference even for Linpack, and for more communication-intensive parallel applications the gap will be much larger.

When choosing a system configuration with the aim of maximizing return on investment, one needs to make sure no artificial bottlenecks are created. Multi-core platforms, parallel applications, large databases and the like require fast data exchange, and lots of it. Ethernet can become the system bottleneck due to its latency and bandwidth limitations and the CPU overhead of TCP/UDP processing (TOE solutions introduce other issues, sometimes more complicated ones, but that is a topic for another blog), reducing system efficiency to 50%. This means that half of the compute system is wasted and merely consumes power and cooling. The same performance capability could have been achieved with half the servers had they been connected with InfiniBand. More data on application performance, productivity and ROI can be found on the HPC Advisory Council web site, under content/best practices.
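The “half of the servers” argument can be put into rough numbers. The short Python sketch below estimates how many identical servers are needed to reach a target sustained Linpack performance at a given cluster efficiency; the function name and all figures are hypothetical, chosen only to illustrate the scaling.

import math

# Rough sizing sketch: how many servers are needed to reach a target sustained
# performance, given per-server peak performance and cluster-level efficiency.
# All names and numbers here are illustrative assumptions, not vendor data.

def servers_needed(target_sustained_tflops: float,
                   per_server_peak_tflops: float,
                   efficiency_pct: float) -> int:
    sustained_per_server = per_server_peak_tflops * efficiency_pct / 100.0
    return math.ceil(target_sustained_tflops / sustained_per_server)

# Target of 100 TFlops sustained, 0.1 TFlops peak per server:
print(servers_needed(100, 0.1, 50))  # ~50% efficient Ethernet cluster  -> 2000 servers
print(servers_needed(100, 0.1, 90))  # ~90% efficient InfiniBand cluster -> 1112 servers

At roughly double the efficiency, roughly half the servers (along with their power and cooling) deliver the same sustained performance, which is the return-on-investment point made above.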

While InfiniBand demonstrates higher efficiency and productivity, there are several ways to increase Ethernet efficiency. One of them is optimizing the transport layer to provide zero copy and lower CPU overhead (not by using TOE solutions, as those introduce single points of failure into the system). This capability is known as LLE (low-latency Ethernet). More on LLE will be discussed in future blogs.

Gilad Shainer, Director of Technical Marketing
gilad@mellanox.com

Thanks for coming to see us at VMworld

VMworld was everything we expected and more. The traffic was tremendous and there was a lot of excitement and buzz in our booth (especially after we won the Best of VMworld award in the Cloud Computing category). In case you were unable to sit through one of Mellanox’s presentations, or one from our partners (Xsigo, HP, Intalio, RNA Networks, and the OpenFabrics Alliance), we went ahead and videotaped the sessions and have posted them below.

  • Mellanox – F.U.E.L. Efficient Virtualized Data Centers
  • Mellanox – On-Demand Network Services
  • Intalio – Private Cloud Platform
  • HP – BladeSystem and ExSO SL-Series
  • Xsigo – How to Unleash vSphere’s Full Potential with Xsigo Virtual I/O
  • RNA Networks – Virtual Memory
  • OpenFabrics Alliance – All things Virtual with OpenFabrics and IB

Winning Gold at VMworld

We were very excited to announce today that the Intalio|Cloud Appliance, accelerated by Mellanox 40Gb/s InfiniBand, has received a Best of VMworld 2009 award in the Cloud Computing Technologies category. It’s not too late to see what the fuss is all about. The Intalio|Cloud Appliance is being demonstrated at our booth (#2220) at VMworld in San Francisco. This is the last day to see us!

Breaking the Cloud “I/O Barrier”

Mellanox and LINBIT just announced a collaboration with Logicworks.

Together, the companies are working to develop a high-performance replication system for Logicworks’ customers. LINBIT DRBD open source technology combined with the InfiniBand fabric from Mellanox will lower costs and make it possible to achieve unprecedented levels of input/output (I/O) performance, leading to improved cloud-based storage management and disaster recovery capabilities. The adoption of InfiniBand for the cloud-based system will provide Logicworks’ customers with the unparalleled performance that is critical for hosting latency sensitive applications.

“By utilizing both LINBIT’s DRBD technology and Mellanox’s InfiniBand interconnects, Logicworks’ customers will be able to take their cloud-based applications to the next level,” said Bart Grantham, R&D vice president, Logicworks. “We are eager to build on our relationship with LINBIT and excited to be among the first in the industry to offer such a solution to our customers.”

TOP500 33rd List Highlights

Started in 1993, the TOP500 lists the fastest computers used today, ranked according to Linpack benchmark results. Published twice a year, the TOP500 list provides an important tool for tracking usage trends in high-performance computing. The 33rd TOP500 List was released in Hamburg, Germany, during the ISC’09 conference.

This year’s list revealed that Mellanox InfiniBand demonstrated up to 94 percent system utilization, only 6 percent under the theoretical limit, providing users with the best return on investment for their high-performance computing server and storage infrastructure. The list also shows that InfiniBand is the only growing industry-standard interconnect solution, increasing 25 percent to 152 systems and representing more than 30 percent of the TOP500. Mellanox ConnectX® InfiniBand adapters and switch systems based on its InfiniScale® III and IV switch silicon provide the scalable, low-latency, and power-efficient interconnect for the world’s fastest supercomputer and the majority of the top 100 systems. Mellanox end-to-end 40Gb/s InfiniBand solutions deliver leading performance and the highest Top10 system efficiency in the 10th-ranked Jülich cluster.

Highlights of InfiniBand usage on the June 2009 TOP500 list include:

– Mellanox InfiniBand interconnect products connect the world’s fastest supercomputer, 4 of the top 10 most prestigious positions, and 9 of the top 20 systems
– Mellanox InfiniBand provides the highest system utilization, up to 94 percent, which is 50 percent higher than that of the best GigE-based system
– Mellanox 40Gb/s end-to-end solutions provide the highest system utilization on the top 10, 15% higher than the average top 20 efficiency
– All InfiniBand-based clusters (152 total supercomputers) use Mellanox solutions
– InfiniBand is the most used interconnect among the top 100 supercomputers with 59 systems, nearly 12 times the combined number of Gigabit Ethernet-based clusters and proprietary high-speed cluster interconnects
– The total number of InfiniBand-connected CPU cores on the list has grown from 606,000 in June 2008 to 1,040,000 in June 2009 (72 percent yearly growth)
– InfiniBand is the only growing industry-standard clustered interconnect in the TOP500 with a 25 percent growth rate compared to June 2008
– Mellanox InfiniBand interconnect products present in the TOP500 are used by a diverse list of applications, from large-scale, high-performance computing to commercial technical computing and enterprise data centers
– The entry level for the TOP500 list is 17 TFlops, 91 percent higher than the 8.9 TFlops necessary to be on the June 2008 list

Full analysis of the TOP500 can be found HERE.

Inauguration of 1st European Petaflop Computer in Jülich, Germany

On Tuesday, May 26, Research Center Jülich reached a significant milestone in German and European supercomputing with the inauguration of two new supercomputers: the supercomputer JUROPA and the fusion machine HPC-FF. The symbolic start of the systems was triggered by the German Federal Minister for Education and Research, Prof. Dr. Annette Schavan, the Prime Minister of North Rhine-Westphalia, Dr. Jürgen Rüttgers, and Prof. Dr. Achim Bachem, Chairman of the Board of Directors at Research Center Jülich, as well as high-ranking international guests from academia, industry and politics.

JUROPA (which stands for Juelich Research on Petaflop Architectures) will be used by more than 200 research groups across Europe to run their data-intensive applications. JUROPA is based on a cluster configuration of Sun Blade servers, Intel Nehalem processors, Mellanox 40Gb/s InfiniBand and the ParaStation cluster operation software from ParTec Cluster Competence Center GmbH. The system was jointly developed by experts at the Jülich Supercomputing Centre and implemented with the partner companies Bull, Sun, Intel, Mellanox and ParTec. It consists of 2,208 compute nodes with a total computing power of 207 teraflops and was sponsored by the Helmholtz Community. Prof. Dr. Dr. Thomas Lippert, Head of the Jülich Supercomputing Centre, explains the HPC installation in Jülich in the video below.

HPC-FF (High Performance Computing for Fusion), drawn up by the team headed by Dr. Thomas Lippert, director of the Jülich Supercomputing Centre, was optimized and implemented together with the partner companies Bull, Sun, Intel, Mellanox and ParTec. This new best-of-breed system, one of Europe’s most powerful, will support advanced research in many areas such as health, information, environment, and energy. It consists of 1,080 compute nodes, each equipped with two quad-core Nehalem EP processors from Intel. Its total computing power of 101 teraflops corresponds, at present, to 30th place on the list of the world’s fastest supercomputers. The combined cluster will achieve 300 teraflops of computing power and will be included in the Top500 list published this month at ISC’09 in Hamburg, Germany.

40Gb/s InfiniBand from Mellanox is used as the system interconnect. The administrative infrastructure is based on NovaScale R422-E2 servers from French supercomputer manufacturer Bull, which supplied the compute hardware and the Sun ZFS/Lustre file system. The cluster operating system, ParaStation V5, is supplied by Munich software company ParTec. HPC-FF is being funded by the European Commission (EURATOM), the member institutes of EFDA, and Forschungszentrum Jülich.

Complete system facts: 3,288 compute nodes; 79 TB main memory; 26,304 cores; 308 teraflops peak performance.
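As a quick sanity check, the combined facts line up with the two machines described above. The short Python sketch below reproduces the totals, under the assumption (consistent with the quoted core count) that JUROPA nodes, like HPC-FF nodes, carry two quad-core processors each; it is illustrative arithmetic only.

# Illustrative arithmetic based on the figures quoted in this post.
juropa_nodes, hpcff_nodes = 2208, 1080
cores_per_node = 2 * 4  # two quad-core processors per node (assumption for JUROPA)

print(juropa_nodes + hpcff_nodes)                     # 3288 compute nodes
print((juropa_nodes + hpcff_nodes) * cores_per_node)  # 26304 cores
print(207 + 101)                                      # 308 TFlops combined peak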