RDMA over Converged Ethernet (RoCE) - An Efficient, Low-Cost, Zero-Copy Implementation

Overview

Remote Direct Memory Access (RDMA) is a remote memory access capability that allows server-to-server data movement directly between application memories without any CPU involvement. RDMA over Converged Ethernet (RoCE) is a mechanism that provides this efficient data transfer with very low latency on lossless Ethernet networks. With advances in data center convergence over reliable Ethernet, ConnectX-2/ConnectX-3 EN with RoCE uses the proven and efficient RDMA transport to provide a platform for deploying RDMA technology in mainstream data center applications at 10GbE and 40GbE link speeds. With its hardware offload support, ConnectX-2/ConnectX-3 EN takes advantage of this efficient RDMA (InfiniBand) transport service over Ethernet to deliver ultra-low latency for performance-critical and transaction-intensive applications such as financial, database, storage, and content delivery networks.
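
To illustrate the zero-copy property described above, the minimal sketch below shows how an application pins an ordinary buffer with the OFED verbs API (libibverbs) so the NIC can DMA into or out of it directly. This is an illustrative fragment under simplified assumptions, not a complete RoCE program; device selection and error handling are abbreviated.

    /* Minimal sketch: registering application memory for zero-copy RDMA
     * with the OFED verbs API (libibverbs). Error handling abbreviated.
     * Build with something like: cc roce_reg.c -libverbs              */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        /* Open the first RDMA-capable device (e.g. a RoCE NIC)
         * and allocate a protection domain for its resources. */
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register (pin) an ordinary application buffer. Once
         * registered, the NIC moves data to and from this memory
         * directly -- no intermediate kernel copies, which is the
         * "zero copy" property the transport offload relies on. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }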

  • Utilizes advances in lossless Ethernet (DCB) for efficient RDMA over Ethernet
  • A low-cost, low-power solution for performance-centric data center applications
  • Traffic classification at the Link Layer (Layer 2), improving network efficiency
  • Latency as low as 1.3 microseconds on lossless Ethernet
  • RDMA transport offload with zero copy for low CPU utilization (see the posting sketch after this list)
  • Improves performance in financial, data warehouse, data mining, storage, database, Web 2.0, and business intelligence applications
  • Ethernet management infrastructure can be leveraged “as-is”
  • A single Ethernet wire for IPC, LAN, and SAN, completing Ethernet convergence
  • Compliant with the IBTA 1.2.1 RoCE standard
  • Supports the IEEE 802.1Qau, 802.1Qbb, and 802.1Qaz (DCB) standards
  • OFED verbs compliant, interoperable with the OFED software stack
  • RoCE utilizes the mature RDMA transport layer defined by the IBTA specification
  • Traffic differentiation at the Link Layer with an IEEE-defined EtherType
  • SNMP-based network management with MIB-II support
  • Interoperable with industry-standard 10GbE (DCB) switches in large clusters
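
As referenced in the transport-offload bullet above, the following hedged sketch shows how a one-sided RDMA WRITE is posted through the verbs API. It assumes a queue pair that has already been connected (for example via the librdmacm connection manager) and that the peer's buffer address and remote key were exchanged out of band; the function name post_rdma_write and its parameters are illustrative, not part of any library API.

    /* Hedged sketch: posting a one-sided RDMA WRITE via OFED verbs.
     * `qp` is an already-connected queue pair; `mr` is a locally
     * registered buffer; `remote_addr`/`rkey` describe the peer's
     * registered buffer, exchanged out of band. Illustrative only. */
    #include <stdint.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        uint64_t remote_addr, uint32_t rkey)
    {
        /* Scatter/gather entry: the NIC reads the payload straight
         * out of the registered application buffer (zero copy). */
        struct ibv_sge sge = {
            .addr   = (uintptr_t)mr->addr,
            .length = (uint32_t)mr->length,
            .lkey   = mr->lkey,
        };

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_WRITE; /* one-sided write */
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED; /* request a completion */
        wr.wr.rdma.remote_addr = remote_addr;       /* peer buffer address */
        wr.wr.rdma.rkey        = rkey;              /* peer's remote key */

        /* The data lands in the peer's memory with no work by the
         * peer's CPU -- the RDMA transport offload handles delivery. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }

The sender would then reap the completion from its send completion queue with ibv_poll_cq; at no point does either side's CPU copy the payload, which is what keeps CPU utilization low.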