Monthly Archives: May 2010

The biggest winner of the new June 2010 Top500 Supercomputers list? InfiniBand!

Published twice a year, the Top500 supercomputers list ranks the world's fastest supercomputers, provides a great indication of HPC market trends and usage models, and serves as a tool for future predictions. The 35th release of the Top500 list was just published, and according to the new results, InfiniBand has become the de-facto interconnect technology for high performance computing.

What hasn't been said about InfiniBand by the competition? Too many times I have heard that InfiniBand is dead and that Ethernet is the killer. I am just sitting in my chair and laughing. InfiniBand is the only interconnect that is growing on the Top500 list, with more than 30% growth year over year (YoY), and it is growing by continuing to uproot Ethernet and the proprietary solutions. Ethernet is down 14% YoY, and it has become very difficult to spot a proprietary cluster interconnect…  Even more telling, in the hard core of HPC, the Top100, 64% of the systems run InfiniBand using solutions from Mellanox. InfiniBand has definitely proven to provide the needed scalability, efficiency and performance, and to deliver the highest CPU or GPU availability to users and applications. With 208 systems connected on the list, InfiniBand is only steps away from connecting the majority of the systems.

What makes InfiniBand so strong? The fact that it solves issues rather than migrating them to other parts of the system. In a balanced HPC system, each component needs to do its own work, not rely on other components to handle its overhead. Mellanox is doing a great job in providing solutions that offload all of the communications, provide the needed acceleration for the CPU or GPU, and maximize the CPU/GPU cycles available to the applications. The collaboration with NVIDIA on NVIDIA GPUDirect, Mellanox CORE-Direct and so forth are just a few examples.

GPUDirect is a great example of how Mellanox can offload the CPU from being involved in GPU-to-GPU communications. No other InfiniBand vendor can do it without using Mellanox technology. GPUDirect requires network offloading or it does not work. Simple. When you want to remove the CPU from the GPU-to-GPU communication path, but your interconnect needs the CPU to handle the transport (since it is an onloading solution), the CPU ends up involved in every GPU transaction. Only offloading interconnects, such as Mellanox InfiniBand, can really deliver the benefits of GPUDirect.
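
To make the CPU-involvement point concrete, here is a minimal, illustrative sketch (not Mellanox's actual implementation) that contrasts a host-staged GPU-to-GPU exchange with a direct one through a CUDA-aware MPI library; the buffer size, the two-rank setup and the availability of CUDA-aware MPI are assumptions for the example. In the staged path the CPU copies every byte through host memory before the send; in the direct path the device pointer goes straight to MPI and the adapter moves the data.

/* Illustrative sketch only (assumptions: two ranks, a CUDA-capable GPU per
 * rank, and a CUDA-aware MPI library for the direct path). */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

#define N (1 << 20)  /* floats to exchange between the two ranks (assumed) */

int main(int argc, char **argv)
{
    int rank;
    float *d_buf, *h_staging;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&d_buf, N * sizeof(float));
    h_staging = (float *)malloc(N * sizeof(float));

    if (rank == 0) {
        /* Path 1: the CPU stages the GPU buffer through host memory and then
         * hands it to MPI - the CPU touches every byte of the transfer. */
        cudaMemcpy(h_staging, d_buf, N * sizeof(float), cudaMemcpyDeviceToHost);
        MPI_Send(h_staging, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);

        /* Path 2: with a CUDA-aware MPI over an offloading interconnect the
         * device pointer is passed straight to MPI; the adapter moves the
         * data and the CPU stays available for the application. */
        MPI_Send(d_buf, N, MPI_FLOAT, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(h_staging, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(d_buf, h_staging, N * sizeof(float), cudaMemcpyHostToDevice);
        MPI_Recv(d_buf, N, MPI_FLOAT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(h_staging);
    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}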

If you want more information on GPUDirect and other solutions, feel free to drop a note to hpc@mellanox.com.

Gilad

Visit Mellanox at ISC’10

It’s almost time for ISC’10 in Hamburg, Germany (May 31-June 3). Please stop by and visit the Mellanox Technologies booth (#331) to learn more about how our products deliver market-leading bandwidth, high performance, scalability, power conservation and cost-effectiveness while converging multiple legacy network technologies into one future-proof solution.

Mellanox’s end-to-end 40Gb/s InfiniBand connectivity products deliver the industry’s leading CPU efficiency rating on the TOP500. Come see our application acceleration and offload technologies that decrease run time and increase cluster productivity.

Hear from our HPC Industry Experts

Exhibitor Forum Session – Tuesday, June 1, 9:40AM – 10:10AM

Speaking: Gilad Shainer, Sr. Director of HPC Marketing / Michael Kagan, CTO

HOT SEAT SESSION – Tuesday, June 1, 3:15PM – 3:30PM

Speaking: Michael Kagan, CTO

JuRoPa Breakfast Session – Wednesday, June 2, 7:30AM – 8:45AM

Speaking: Gilad Shainer, Sr. Director of HPC Marketing / Michael Kagan, CTO

“Low Latency, High Throughput, RDMA & the Cloud In-Between” – Wednesday, June 2, 10:00AM – 10:30AM

Speaking: Gilad Shainer, Sr. Director of HPC Marketing

“Collectives Offloads for Large Scale Systems” – Thursday, June 3, 11:40AM – 12:20PM

Speaking: Gilad Shainer, Mellanox Technologies; Prof. Dr. Richard Graham, Oak Ridge National Laboratory

“RoCE – New Concept of RDMA over Ethernet” – Thursday, June 3, 12:20PM – 1:00PM

Speaking: Gilad Shainer, Sr. Director of HPC Marketing and Bill Lee, Sr. Product Marketing Manager

Mellanox Scalable HPC Solutions with NVIDIA GPUDirect Technology Enhance GPU-Based HPC Performance and Efficiency

Mellanox announced the immediate availability of NVIDIA GPUDirect™ technology with Mellanox ConnectX®-2 40Gb/s InfiniBand adapters, which boosts GPU-based cluster efficiency and increases performance by an order of magnitude over today’s fastest high-performance computing clusters. Read the entire press release here.

Paving The Road to Exascale – Part 1 of many

1996 was the year the world saw the first Teraflops system. Twelve years later, the first Petaflop system was built. It took the HPC world 12 years to increase performance by a factor of 1000. Exascale computing, another performance jump by a factor of 1000, will not take another 12 years. Expectations indicate that we will see the first Exascale system in the year 2018, only 10 years after the introduction of the Petaflop system. How we get to an Exascale system is a good question, but we can definitely put down some guidelines on how to do it right. Since there is much to write on this subject, this will probably take multiple blog posts, and we have time till 2018…  :)

Here are the items that I have in mind as overall guidelines:

-  Dense computing – we can’t populate Earth with servers as we need some space for living… so dense solutions will need to be built – packing as many cores as possible in a single rack. This is a task for the Dell folks…  :)

-  Power efficiency – energy is limited, and today’s data centers already consume too much power. Apart from alternative energy solutions, Exascale systems will need to be energy efficient, and this covers all of the system’s components – CPUs, memory, networking. Every Watt is important.

-  Many, many cores – CPUs/GPUs, as many as possible, and be sure software will use them all

-  Offloading networks – every Watt is important and every flop needs to be efficient. CPU/GPU availability will be critical in order to achieve the performance goals; no one can afford to waste cores on non-compute activities (see the sketch after this list).

-  Efficiency – balanced systems, no jitter, no noise, the same order of magnitude of latency everywhere – between CPUs, between GPUs, between end-points

-  Ecosystem/partnership is a must – no one can do it alone.
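
On the offloading-networks point, here is a minimal sketch of the idea behind collective offload: start a reduction, keep the CPU busy with application work while the collective progresses, and only wait when the result is actually needed. It uses MPI-3 non-blocking collectives purely as a stand-in for hardware offload such as CORE-Direct; the element count and the compute step are assumptions for the example.

/* Illustrative sketch of communication/computation overlap; MPI-3
 * non-blocking collectives stand in for hardware collective offload. */
#include <mpi.h>

#define N 4096  /* local elements per rank (assumed) */

static void local_compute(double *v, int n)
{
    /* placeholder for application work that overlaps the collective */
    for (int i = 0; i < n; i++)
        v[i] = v[i] * 1.000001 + 1.0;
}

int main(int argc, char **argv)
{
    static double sendbuf[N], recvbuf[N], work[N];
    MPI_Request req;

    MPI_Init(&argc, &argv);

    /* start the allreduce, but do not block on it */
    MPI_Iallreduce(sendbuf, recvbuf, N, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* the CPU keeps doing useful work while the collective progresses */
    local_compute(work, N);

    /* only pay for the collective when its result is actually required */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}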

In future posts I will expand on the different guidelines, and I definitely welcome your feedback.

————————————————————————-
Gilad Shainer
Senior Director, HPC and Technical Computing
gilad@mellanox.com