Mellanox OFED GPUDirect RDMA

The latest advancement in GPU-GPU communication is GPUDirect RDMA. This technology provides a direct peer-to-peer (P2P) data path between GPU memory and Mellanox HCA devices. It significantly reduces GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network. GPUDirect RDMA leverages the PeerDirect RDMA and PeerDirect ASYNC™ capabilities of Mellanox network adapters.

  • Avoids unnecessary system-memory copies and CPU overhead by copying data directly to/from pinned GPU memory
  • Enables peer-to-peer transfers between GPU devices and Mellanox RDMA devices
  • Uses high-speed DMA transfers to copy data between P2P devices
  • Eliminates CPU bandwidth and latency bottlenecks using direct memory access (DMA)
  • Allows GPU memory to be used for Remote Direct Memory Access (RDMA), resulting in more efficient applications
  • Boosts Message Passing Interface (MPI) applications with zero-copy support
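To illustrate the mechanism the bullets above describe, the sketch below (an assumption-laden example, not taken from the Mellanox documentation) registers a `cudaMalloc()` buffer with the HCA through libibverbs. With the GPUDirect RDMA kernel module loaded, `ibv_reg_mr()` accepts the device pointer, so the HCA can DMA to and from GPU memory with no staging copy through host memory:

```cuda
// Illustrative sketch: register GPU memory for RDMA via libibverbs.
// Assumes an RDMA-capable HCA, MLNX_OFED, and the peer-memory module.
#include <infiniband/verbs.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    // Open the first available RDMA device and allocate a protection domain.
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    // Allocate the communication buffer in GPU memory, not host memory.
    void *gpu_buf = NULL;
    size_t len = 1 << 20;  // 1 MiB
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) return 1;

    // Register the GPU buffer for RDMA. Without GPUDirect RDMA this call
    // fails for a device pointer; with it, mr->lkey / mr->rkey can be used
    // in RDMA work requests exactly like keys for host memory regions.
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("registered GPU buffer, rkey=0x%x\n", mr->rkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

This is also what gives MPI its zero-copy path: a CUDA-aware MPI library can pass device pointers straight to the send/receive calls and let the HCA move the data.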
System Requirements

Driver
  • nvidia-peer-memory_1.1.tar.gz

Platform

HCAs

GPUs
  • NVIDIA® Tesla™ / Quadro™ K-Series or Tesla™ / Quadro™ P-Series GPU

Software/Plugins
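The driver package above installs a kernel module named nv_peer_mem, and its presence can be confirmed by scanning the kernel module list in /proc/modules. The helper below is a hypothetical convenience, not part of the package:

```python
def nv_peer_mem_loaded(modules_text: str) -> bool:
    """Return True if nv_peer_mem appears as a module name in
    /proc/modules-style text (the module name is the first field)."""
    return any(line.split()[0] == "nv_peer_mem"
               for line in modules_text.splitlines() if line.strip())

# On a real system: nv_peer_mem_loaded(open("/proc/modules").read())
sample = ("nv_peer_mem 16384 0 - Live 0xffffffffc0a00000\n"
          "ib_core 313344 9 nv_peer_mem,mlx5_ib, Live 0xffffffffc09a0000\n")
print(nv_peer_mem_loaded(sample))  # → True
```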
