Monthly Archives: April 2010

GPU-Direct Technology – Accelerating GPU based Systems

The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics accelerators a compelling platform for computationally demanding tasks in a wide variety of application domains. Thanks to the great computational power of the GPU, GPGPU computing has proven valuable in various areas of science and technology.

GPU-based clusters are used to perform compute-intensive tasks such as finite element computations, Computational Fluid Dynamics, and Monte Carlo simulations. Several of the world’s leading supercomputers use GPUs to achieve the desired performance. Because GPUs provide a high core count and strong floating point capability, a high-speed interconnect such as InfiniBand is required between the GPU platforms to deliver the throughput and low latency that GPU-to-GPU communication demands.

While GPUs have been shown to provide worthwhile performance acceleration, yielding benefits in both price/performance and power/performance, several areas of GPU-based clusters could be improved to deliver higher performance and efficiency. One of the main performance issues in deploying clusters of multi-GPU nodes is the interaction between the GPUs, that is, the GPU-to-GPU communication model. Prior to GPU-Direct technology, any communication between GPUs had to involve the host CPU and required extra buffer copies: the CPU had to initiate and manage the memory transfers between the GPUs and the InfiniBand network. Each GPU-to-GPU communication involved the following steps (sketched in the code after the list):

  1. The GPU writes the data to a host memory buffer dedicated to the GPU
  2. The host CPU copies the data from the GPU-dedicated host memory to host memory that the InfiniBand device can use for RDMA communications
  3. The InfiniBand device reads the data from that buffer and sends it to the remote node
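
A minimal sketch of this staging path is shown below, assuming CUDA and an already established InfiniBand connection; the buffer names and message size are hypothetical, and the verbs setup and the actual ibv_post_send() call are omitted for brevity.

    /* Pre-GPU-Direct staging path: the data crosses host memory twice
       before the InfiniBand device can send it. Illustrative sketch only. */
    #include <cuda_runtime.h>
    #include <stdlib.h>
    #include <string.h>

    #define MSG_SIZE (1 << 20)                    /* 1 MB payload, arbitrary */

    int main(void)
    {
        void *gpu_buf;                    /* data produced on the GPU         */
        void *gpu_host_buf;               /* host memory dedicated to the GPU */
        void *ib_host_buf;                /* host memory used by the HCA      */

        cudaMalloc(&gpu_buf, MSG_SIZE);
        cudaMallocHost(&gpu_host_buf, MSG_SIZE);  /* pinned for the GPU */
        ib_host_buf = malloc(MSG_SIZE);           /* would be registered with
                                                     ibv_reg_mr() in real code */

        /* Step 1: the GPU writes its data into the GPU-dedicated host buffer. */
        cudaMemcpy(gpu_host_buf, gpu_buf, MSG_SIZE, cudaMemcpyDeviceToHost);

        /* Step 2: the host CPU copies the data into the buffer the InfiniBand
           device is allowed to read for RDMA. */
        memcpy(ib_host_buf, gpu_host_buf, MSG_SIZE);

        /* Step 3: the InfiniBand device reads ib_host_buf and sends it to the
           remote node, e.g. via ibv_post_send() on an established queue pair
           (connection setup omitted here). */

        free(ib_host_buf);
        cudaFreeHost(gpu_host_buf);
        cudaFree(gpu_buf);
        return 0;
    }

With GPU-Direct, the second copy is no longer needed, since the GPU and the InfiniBand device can share the same pinned host memory region.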

Gilad Shainer
Senior Director of HPC and Technical Marketing

InfiniBand Leads the Russian Top50 Supercomputers List; Connects 74 Percent, Including Seven of the Top10 Supercomputers

Announced last week, the Russian TOP50 list ranks the fastest computers in Russia according to Linpack benchmark results. The list provides an important tool for tracking usage trends in high-performance computing in Russia.

Mellanox 40Gb/s InfiniBand adapters and switches enable the fastest supercomputer on the Russian TOP50 list, with a peak performance of 414 teraflops. More importantly, it is clear that InfiniBand dominates the list as the most used interconnect solution, connecting 37 systems, including the top three systems and seven of the Top10. According to the Linpack benchmark, InfiniBand’s high system efficiency and utilization, demonstrated at up to 92 percent, allow users to maximize the return on investment in their high-performance computing server and storage infrastructure. Nearly three quarters of the list, represented by leading research laboratories, universities, industrial companies and banks in Russia, rely on industry-leading InfiniBand solutions to provide the highest bandwidth, efficiency, scalability, and application performance.
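
For context, the Linpack efficiency quoted above is simply the measured result (Rmax) divided by the theoretical peak (Rpeak). The small fragment below shows the arithmetic; the numbers in it are illustrative only, not figures taken from the Russian TOP50 list.

    /* Linpack efficiency = Rmax / Rpeak. The values below are
       hypothetical examples, not actual Russian TOP50 entries. */
    #include <stdio.h>

    int main(void)
    {
        double rpeak_tflops = 100.0;      /* theoretical peak performance */
        double rmax_tflops  = 92.0;       /* measured Linpack performance */
        printf("Linpack efficiency: %.1f%%\n",
               100.0 * rmax_tflops / rpeak_tflops);
        return 0;
    }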

Highlights of InfiniBand usage on the April 2010 Russian TOP50 list include:

  • Mellanox InfiniBand connects 74 percent of the Top50 list, including seven of the Top10 most prestigious positions (#1, #2, #3, #6, #8, #9 and #10)
  • Mellanox InfiniBand provides world-leading system utilization, up to 92 percent efficiency as measured by the Linpack benchmark
  • The list showed a sharp increase in aggregated performance – the total peak performance exceeded 1 PFlops, reaching 1,152.9 TFlops, an increase of 120 percent over the September 2009 list – highlighting the increasing demand for higher performance
  • Ethernet connects only 14 percent of the list (seven systems), and no 10GigE clusters appear on the list
  • Proprietary clustering interconnects declined 40 percent and now connect only three systems on the list

I look forward to seeing the TOP500 results in June at the International Supercomputing Conference. I will be attending and hope to see all of our HPC friends in Germany.

Brian Sparks
Sr. Director of Marketing Communications