Mellanox OFED GPUDirect RDMA Beta

Overview

The latest advancement in GPU-GPU communication is GPUDirect RDMA. This new technology provides a direct peer-to-peer (P2P) data path between GPU memory and Mellanox HCA devices. It significantly decreases GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network.
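
To illustrate what this data path looks like at the application level, the sketch below registers a CUDA device buffer with a Mellanox HCA through the standard verbs call ibv_reg_mr(). It is a minimal, hedged example: the choice of the first RDMA device, the buffer size, and the access flags are illustrative only, and the registration of a GPU pointer succeeds only on a system where the GPUDirect RDMA kernel support provided with this release is installed and loaded.

    /* Sketch: registering GPU memory for RDMA with libibverbs.
     * Assumptions: MLNX_OFED with GPUDirect RDMA support loaded;
     * device index, buffer size, and error handling are illustrative. */
    #include <stdio.h>
    #include <infiniband/verbs.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        const size_t len = 1 << 20;        /* 1 MiB buffer, arbitrary size */
        void *gpu_buf = NULL;

        /* Allocate device memory; with GPUDirect RDMA this pointer can be
         * registered with the HCA much like a pinned host pointer. */
        if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
            fprintf(stderr, "cudaMalloc failed\n");
            return 1;
        }

        int num = 0;
        struct ibv_device **dev_list = ibv_get_device_list(&num);
        if (!dev_list || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(dev_list[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Registering the GPU pointer lets the HCA DMA directly to/from
         * GPU memory, with no staging through system memory. */
        struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            fprintf(stderr, "ibv_reg_mr on GPU memory failed\n");
            return 1;
        }
        printf("GPU buffer registered: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(dev_list);
        cudaFree(gpu_buf);
        return 0;
    }

The registered memory region can then be used as the source or target of RDMA operations exactly as a host-memory region would be.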

Please note: No previous version of OFED or MLNX_OFED, and no patches applied with the ALPHA releases of GPUDirect RDMA, should be installed on your system.

Currently, the BETA release of GPUDirect RDMA technology requires the following prerequisites to be installed and operational:

GPU Adapters supported (required): NVIDIA® Tesla™ K-Series (K10, K20, K40)

GPU Drivers supported (required): NVIDIA Linux x64 (AMD64/EM64T) Display Driver, Version 331.20 or later

Mellanox Interconnect Adapters supported (required): ConnectX-3, ConnectX-3 Pro, or Connect-IB

Mellanox OFED (required): Mellanox OFED 2.1 or later

Development tools (optional): NVIDIA® CUDA® 5.5 or later

GPUDirect RDMA provides the following benefits:

  • Avoid unnecessary system memory copies and CPU overhead by copying data directly to/from pinned GPU memory
  • Perform peer-to-peer transfers between GPU memory and Mellanox RDMA devices
  • Use high-speed DMA transfers to copy data between P2P devices
  • Eliminate CPU bandwidth and latency bottlenecks using direct memory access (DMA)
  • Use GPU memory for Remote Direct Memory Access (RDMA), resulting in more efficient applications
  • Boost Message Passing Interface (MPI) applications with zero-copy support (see the sketch after this list)
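
The zero-copy MPI item above can be pictured with the hedged sketch below. It assumes an MPI library built with CUDA support (for example, Open MPI with CUDA-aware transports), which is not part of this release; the rank roles, message size, and data type are illustrative only. The device pointer is handed directly to MPI, and with GPUDirect RDMA the HCA reads and writes GPU memory without staging copies through host buffers.

    /* Sketch: zero-copy MPI exchange straight from GPU memory.
     * Assumptions: a CUDA-aware MPI library and GPUDirect RDMA enabled. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int count = 1 << 20;          /* number of floats, arbitrary */
        float *d_buf;
        cudaMalloc((void **)&d_buf, count * sizeof(float));

        /* The device pointer is passed to MPI directly; no cudaMemcpy to a
         * host staging buffer is required on either side. */
        if (rank == 0)
            MPI_Send(d_buf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }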