Yearly Archives: 2010

National Supercomputing Centre in Shenzhen (NSCS) – #2 on June 2010 Top500 list

I had the pleasure of being a little bit involved in the creation of the fastest supercomputer in Asia, and the second fastest supercomputer in the world – the Dawning “Nebulae” Petaflop Supercomputer at SIAT. If we look at the peak flops capacity of the system – nearly 3 Petaflops – it is the largest supercomputer in the world. I visited the supercomputer site in April and saw how fast it was assembled. It took around three weeks to get it up and running – amazing, and one of the benefits of using a cluster architecture instead of expensive proprietary systems. The first picture, by the way, was taken during the system setup in Shenzhen.


The system includes 5200 Dawning TC3600 blades, each with an NVIDIA Fermi GPU, providing 120K cores, all connected with Mellanox ConnectX InfiniBand QDR adapters, IS5000 switches and the fabric management. It is the 3rd system in the world to provide more than a Petaflop of sustained performance (after Roadrunner and Jaguar). Unlike Jaguar (from Cray), which requires around 20K nodes to reach that performance, Nebulae does it with only 5.2K nodes – reducing the needed real estate, etc., and making it much more cost-effective. It is yet more proof that commodity-based supercomputers can deliver better performance, cost/performance and other x/performance metrics compared to proprietary systems. As GPUs gain higher popularity, we also witness the effort being made to create and port the needed applications to GPU-based environments, which will bring a new era of GPU computing. It is clear that GPUs will drive the next phase of supercomputers, together of course with the new speeds and feeds of the interconnect solutions (such as the IBTA’s new specifications for the FDR/EDR InfiniBand speeds).

The second picture was taken at the ISC’10 conference, after the Top500 award ceremony. You can see the Top500 certificates…


Regards,

Gilad Shainer
Shainer@mellanox.com

Paving The Road to Exascale – Part 2 of many

In the introduction to the “Paving the Road to Exascale” series of posts (part 1), one of the items I mentioned was “many, many cores, CPUs or GPUs”. The basic performance of a given system is measured in flops. Each CPU/GPU is capable of X flops (which can be calculated, for example, as the number of parallel operations per cycle * frequency * cores), and the sum across all of them gives you the maximum compute capability of the system. How much you can really utilize for your application depends on the system design, memory bandwidth, interconnect, etc. On the Top500 list you can see, for each of the systems listed, the maximum (peak) flops and the effective, or measured, performance using the Linpack benchmark.
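To make the arithmetic concrete, here is a minimal C sketch of that peak-flops calculation. All the numbers in it are illustrative assumptions, not the figures of any particular system mentioned here.

  #include <stdio.h>

  /* Peak flops = parallel operations per cycle * frequency * cores,
   * summed over all nodes. The values below are assumptions for
   * illustration only. */
  int main(void)
  {
      double ops_per_cycle  = 4.0;     /* parallel floating point ops per clock (assumed) */
      double frequency_hz   = 2.5e9;   /* core clock in Hz (assumed) */
      double cores_per_node = 8.0;     /* cores per CPU/GPU node (assumed) */
      double nodes          = 5000.0;  /* nodes in the system (assumed) */

      double peak_per_node = ops_per_cycle * frequency_hz * cores_per_node;
      double system_peak   = peak_per_node * nodes;

      printf("Peak per node: %.1f Gflops\n", peak_per_node / 1e9);
      printf("System peak:   %.1f Tflops\n", system_peak / 1e12);
      return 0;
  }

The ratio between the measured Linpack result and this theoretical peak is the efficiency figure discussed throughout these posts.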

In order to achieve the increasing performance targets (we are talking about paving the road to Exascale…), we need to have as many cores as possible. As we all witness, GPUs have become the most cost-effective compute element, and the natural choice for bringing the desired compute capability to the next generation of supercomputers. A simple comparison shows that with a proprietary design, such as a Cray machine, one needs around 20K nodes to achieve Petascale computing, while using GPUs (assuming one per server), 5K nodes are enough to achieve a similar performance capability – a much more cost-effective solution.

So, now that we are starting to plug more and more GPUs into the new supercomputers, there are two things we need to take care of: first, start working on the application side and port applications to use parallel GPU computation (a subject for a whole new blog post), and second, make sure the communication between the GPUs is as efficient as possible. For the latter, we have seen the recent announcements from NVIDIA and Mellanox on creating a new interface, called GPUDirect, that enables a better and more efficient communication path between the GPUs and the InfiniBand interconnect. The new interface eliminates CPU involvement from the GPU communications data path, using host memory as the medium between the GPU and the InfiniBand adapter. One needs to be aware that the GPUDirect solution requires network offload capability to completely eliminate the CPU from the data path; if the network requires CPU cycles to send and receive traffic, the CPU will still be involved in the data path! Once you eliminate the CPU from the GPU data path, you can reduce GPU communication time by 30%.

We will be seeing more and more optimizations for GPU communications on high-speed networks. The end goal is, of course, to provide local system latencies for remote GPUs, and with that ensure maximum utilization of the GPUs’ flops capability.

Till next time,

Gilad Shainer
shainer@mellanox.com

The biggest winner of the new June 2010 Top500 Supercomputers list? InfiniBand!

Published twice a year, the Top500 supercomputers list ranks the world’s fastest supercomputers and provides a great indication of HPC market trends and usage models, as well as a tool for future predictions. The 35th release of the Top500 list was just published, and according to the new results, InfiniBand has become the de-facto interconnect technology for high-performance computing.

What hasn’t been said about InfiniBand by the competition? Too many times I have heard that InfiniBand is dead and that Ethernet is the killer. I just sit in my chair and laugh. InfiniBand is the only interconnect that is growing on the Top500 list – more than 30% growth year over year (YoY) – and it keeps growing by continuing to uproot Ethernet and the proprietary solutions. Ethernet is down 14% YoY, and it has become very difficult to spot a proprietary cluster interconnect…  Even more, in the hard core of HPC, the Top100, 64% of the systems run on InfiniBand and use solutions from Mellanox. InfiniBand has definitely proven to provide the needed scalability, efficiency and performance, and to really deliver the highest CPU or GPU availability to the user and to the applications. With 208 systems connected on the list, InfiniBand is only steps away from connecting the majority of the systems.

What makes InfiniBand so strong? The fact that it solves issues rather than shifting them to other parts of the system. In a balanced HPC system, each component needs to do its own work and not rely on other components to handle its overhead tasks. Mellanox is doing a great job of providing solutions that offload all the communications, provide the needed accelerations for the CPU or GPU, and maximize the CPU/GPU cycles available to the applications. The collaborations with NVIDIA on NVIDIA GPUDirect, Mellanox CORE-Direct and so forth are just a few examples.

GPUDirect is a great example of how Mellanox can offload the CPU from being involved in GPU-to-GPU communications. No other InfiniBand vendor can do it without using Mellanox technology. GPUDirect requires network offloading or it does not work. Simple. When you want to offload the CPU from being involved in GPU-to-GPU communications, but your interconnect needs the CPU to handle the transport (since it is an onloading solution), the CPU is involved in every GPU transaction. Only offloading interconnects, such as Mellanox InfiniBand, can really deliver the benefits of GPUDirect.

If you want more information on GPUDirect and other solutions, feel free to drop a note to hpc@mellanox.com.

Gilad

Visit Mellanox at ISC’10

It’s almost time for ISC’10 in Hamburg, Germany (May 31-June 3). Please stop by and visit the Mellanox Technologies booth (#331) to learn more about how our products deliver market-leading bandwidth, high performance, scalability, power conservation and cost-effectiveness while converging multiple legacy network technologies into one future-proof solution.

Mellanox’s end-to-end 40Gb/s InfiniBand connectivity products deliver the industry’s leading CPU efficiency rating on the TOP500. Come see our application acceleration and offload technologies that decrease run time and increase cluster productivity.

Hear from our HPC Industry Experts

Exhibitor Forum Session – Tuesday, June 1, 9:40AM – 10:10AM

Speaking: Gilad Shainer, Sr. Director of HPC Marketing / Michael Kagan, CTO

HOT SEAT SESSION – Tuesday, June 1, 3:15PM – 3:30PM

Speaking: Michael Kagan, CTO

JuRoPa breakfast Session – Wednesday, June 2, 7:30AM – 8:45AM

Speaking: Gilad Shainer, Sr. Director of HPC Marketing / Michael Kagan, CTO

“Low Latency, High Throughput, RDMA & the Cloud In-Between” – Wednesday, June 2, 10:00AM – 10:30AM

Speaking: Gilad Shainer, Sr. Director of HPC Marketing

“Collectives Offloads for Large Scale Systems” – Thursday, June 3, 11:40AM – 12:20PM

Speaking: Gilad Shainer, Mellanox Technologies; Prof. Dr. Richard Graham, Oak Ridge National Laboratory

“RoCE – New Concept of RDMA over Ethernet” – Thursday, June 3, 12:20PM – 1:00PM

Speaking: Gilad Shainer, Sr. Director of HPC Marketing and Bill Lee, Sr. Product Marketing Manager

Mellanox Scalable HPC Solutions with NVIDIA GPUDirect Technology Enhance GPU-Based HPC Performance and Efficiency

Mellanox announced the immediate availability of NVIDIA GPUDirect™ technology with Mellanox ConnectX®-2 40Gb/s InfiniBand adapters, which boosts GPU-based cluster efficiency and increases performance by an order of magnitude over today’s fastest high-performance computing clusters. Read the entire press release here:


Paving The Road to Exascale – Part 1 of many

1996 was the year the world saw the first Teraflops system. Twelve years later, the first Petaflop system was built. It took the HPC world 12 years to increase performance by a factor of 1,000. Exascale computing, another performance jump by a factor of 1,000, will not take another 12 years. Expectations indicate that we will see the first Exascale system in the year 2018, only 10 years after the introduction of the Petaflop system. How we get to an Exascale system is a good question, but we can definitely put down some guidelines on how to do it right. Since there is much to write on this subject, this will probably take multiple blog posts, and we have time till 2018…  :)

Here are the items that I have in mind as overall guidelines:

-  Dense computing – we can’t populate Earth with servers as we need some space for living… so dense solutions will need to be built – packing as many cores as possible in a single rack. This is a task for the Dell folks…  :)

-  Power efficiency – energy is limited, and today’s data centers already consume too much power. Apart from alternative energy solutions, Exascale systems will need to be energy efficient, and this covers all of the system’s components – CPUs, memory, networking. Every Watt is important.

-  Many, many cores – CPUs/GPUs, as many as possible, and be sure, software will use them all

-  Offloading networks – every Watt is important, every flop needs to be efficient. CPU/GPU availability will be critical in order to achieve the performance goals. No one can afford to waste cores on non-compute activities.

-  Efficiency – balanced systems, no jitter, no noise, the same order of magnitude of latency everywhere – between CPUs, between GPUs, between end-points

-  Ecosystem/partnership is a must – no one can do it by himself.

In future posts I will expand on the different guidelines, and definitely welcome your feedback.

————————————————————————-
Gilad Shainer
Senior Director, HPC and Technical Computing
gilad@mellanox.com

GPU-Direct Technology – Accelerating GPU based Systems

The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics accelerators a compelling platform for computationally demanding tasks in a wide variety of application domains. Due to the great computational power of the GPU, the GPGPU method has proven valuable in various areas of science and technology.

GPU-based clusters are being used to perform compute-intensive tasks, such as finite element computations, computational fluid dynamics, Monte Carlo simulations, etc. Several of the world’s leading supercomputers use GPUs in order to achieve the desired performance. Since GPUs provide a high core count and floating point capability, a high-speed network such as InfiniBand is required to connect the GPU platforms in order to provide the needed throughput and the lowest latency for GPU-to-GPU communications.

While GPUs have been shown to provide worthwhile performance acceleration, yielding benefits to both price/performance and power/performance, several areas of GPU-based clusters could be improved in order to provide higher performance and efficiency. One of the main performance issues with deploying clusters consisting of multi-GPU nodes involves the interaction between the GPUs, or the GPU-to-GPU communication model. Prior to the GPU-Direct technology, any communication between GPUs had to involve the host CPU and required buffer copies. The GPU communication model required the CPU to initiate and manage memory transfers between the GPUs and the InfiniBand network. Each GPU-to-GPU communication had to follow these steps:

  1. The GPU writes data to host memory dedicated to the GPU
  2. The host CPU copies the data from the GPU-dedicated host memory to host memory available for the InfiniBand devices to use for RDMA communications
  3. The InfiniBand device reads the data from that area and sends it to the remote node
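To make that data path concrete, here is a minimal C sketch of the three steps above. The CUDA runtime calls (cudaMallocHost, cudaMemcpy, cudaFreeHost) are real APIs; post_ib_send() is a hypothetical placeholder for posting the InfiniBand send (which in practice goes through the verbs interface) and is shown only to mark where step 3 happens.

  #include <cuda_runtime.h>
  #include <string.h>

  /* Hypothetical helper: hands an RDMA-registered host buffer to the
   * InfiniBand adapter for sending (step 3). Not a real library call. */
  extern void post_ib_send(void *ib_buf, size_t len);

  /* Pre-GPU-Direct flow: two host buffers and an extra CPU copy. */
  void send_gpu_buffer(const void *gpu_buf, void *ib_buf, size_t len)
  {
      /* Step 1: the GPU data is copied into a host buffer dedicated
       * to the GPU (pinned by the CUDA driver). */
      void *gpu_host_buf;
      cudaMallocHost(&gpu_host_buf, len);
      cudaMemcpy(gpu_host_buf, gpu_buf, len, cudaMemcpyDeviceToHost);

      /* Step 2: the host CPU copies the data into the host buffer that
       * is registered with the InfiniBand adapter for RDMA. */
      memcpy(ib_buf, gpu_host_buf, len);

      /* Step 3: the InfiniBand device reads the registered buffer and
       * sends it to the remote node. */
      post_ib_send(ib_buf, len);

      cudaFreeHost(gpu_host_buf);
  }

With GPU-Direct, the copy in step 2 goes away: the CUDA driver and the InfiniBand driver share the same pinned host memory region, so the CPU no longer has to move data between two host buffers.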

Gilad Shainer
Senior Director of HPC and Technical Marketing

InfiniBand Leads the Russian Top50 Supercomputers List; Connects 74 Percent, Including Seven of the Top10 Supercomputers

Announced last week, the Russian TOP50 list ranks the fastest computers in Russia according to Linpack benchmark results. This list provides an important tool for tracking usage trends in high-performance computing in Russia.

Mellanox 40Gb/s InfiniBand adapters and switches enable the fastest supercomputer on the Russian Top50 Supercomputers list, with a peak performance of 414 teraflops. More importantly, it is clear that InfiniBand dominates the list as the most used interconnect solution, connecting 37 systems, including the top three systems and seven of the Top10. InfiniBand’s high system efficiency and utilization – up to 92 percent as measured by the Linpack benchmark – allow users to maximize the return on investment of their high-performance computing server and storage infrastructure. Nearly three quarters of the list, represented by leading research laboratories, universities, industrial companies and banks in Russia, rely on industry-leading InfiniBand solutions to provide the highest bandwidth, efficiency, scalability and application performance.

Highlights of InfiniBand usage on the April 2010 Russia TOP50 list include:

  • Mellanox InfiniBand connects 74 percent of the Top50 list, including seven of the Top10 most prestigious positions (#1, #2, #3, #6, #8, #9 and #10)
  • Mellanox InfiniBand provides world-leading system utilization, up to 92 percent efficiency as measured by the Linpack benchmark
  • The list showed a sharp increase in the aggregated performance – the total peak performance of the list exceeded 1 PFlops to reach 1152.9 TFlops, an increase of 120 percent compared to the September 2009 list – highlighting the increasing demand for higher performance
  • Ethernet connects only 14 percent of the list (seven systems), and there were no 10GigE clusters
  • Proprietary clustering interconnects declined 40 percent to connect only three systems on the list

I look forward to seeing the results of the Top500 in June at the International Supercomputing Conference.  I will be attending the conference, and I look forward to seeing all of our HPC friends in Germany.

Brian Sparks
Sr. Director of Marketing Communications

Oracle CEO Sees Expansion of InfiniBand

During Oracle’s recent earnings conference call, Oracle CEO Larry Ellison noted that the Oracle Sun Exadata – with Mellanox InfiniBand – continues to gain market adoption with its stunning database and transaction performance, at over 10X that of its competitors. Ellison also spoke of Oracle’s intention to port additional middleware and applications to run over the InfiniBand network, and across the wide array of server and storage system product lines gained through its Sun acquisition, further expanding the use of InfiniBand technology.

Mellanox’s technology, leveraged in Oracle-based server and storage systems, continues to expand in enterprise applications for Tier 1 customers, providing these end-users with the lowest latency performance and highest return-on-investment for their most commonly-used business applications.

Partners Healthcare Cuts Latency of Cloud-based Storage Solution Using Mellanox InfiniBand Technology

An interesting article just came out from Dave Raffo at SearchStorage.com. I have a quick summary below, but you should certainly read the full article here: “Health care system rolls its own data storage ‘cloud’ for researchers.”

Partners HealthCare, a non-profit organization founded in 1994 by Brigham and Women’s Hospital and Massachusetts General Hospital, is an integrated health care system that offers patients a continuum of coordinated high-quality care.

Over the past few years, ever-increasing advances in the resolution and accuracy of medical devices and instrumentation technologies have led to an explosion of data in biomedical research. Partners recognized early on that a cloud-based research compute and storage infrastructure could be a compelling alternative for its researchers. Not only would it enable them to distribute costs and provide storage services on demand, it would also save the IT management time spent fixing all the independent research computers distributed across the Partners network.

Initially, Partners HealthCare chose Ethernet as the network transport technology. As demand grew, the solution began hitting significant performance bottlenecks, particularly when reading and writing hundreds of thousands of small files. The issue was found to lie with the interconnect – Ethernet created problems due to its high natural latency. In order to provide a scalable, low-latency solution, Partners HealthCare turned to InfiniBand. With InfiniBand on the storage back end, Partners experienced roughly two orders of magnitude faster read times. “One user had over 1,000 files, but only took up 100 gigs or so,” said Brent Richter, corporate manager for enterprise research infrastructure and services at Partners HealthCare System. “Doing that with Ethernet would take about 40 minutes just to list that directory. With InfiniBand, we reduced that to about a minute.”

Also, Partners chose InfiniBand over 10-Gigabit Ethernet because InfiniBand is a lower latency protocol. “InfiniBand was price competitive and has lower latency than 10-Gig Ethernet,” he said.

Richter said the final price tag came to about $1 per gigabyte.

By integrating Mellanox InfiniBand into the storage solution, Partners HealthCare was able to reduce latency to near zero and increase performance, providing its customers with faster response times and higher capacity.

Till next time,

Brian Sparks

Sr. Director, Marketing Communication