Mellanox Socket Direct Adapters
Maximize Data Center Performance and Increase ROI
Mellanox Socket Direct® is an innovative network adapter architecture that gives multiple CPU sockets direct PCIe access to the network, eliminating the need for network traffic to traverse the inter-processor bus and optimizing overall system performance.
Eliminate Traffic Bottlenecks
Socket Direct network adapters are available in several form factors: a set of two PCIe cards joined by a connecting harness, with the PCIe lanes split between the two cards; or an OCP 3.0 card (on models that support Mellanox Multi-Host). Mellanox Socket Direct technology is also available as a single PCIe x16 card for a bifurcated PCIe slot composed of two x8 interfaces; this solution is available from leading OEMs, so contact Mellanox for additional information.
Mellanox Socket Direct adapters enable several CPUs within a multi-socket server to connect directly to the network, each through its own dedicated PCIe interface. The result is extremely low latency, reduced CPU utilization, and higher network throughput. Mellanox Socket Direct also improves Artificial Intelligence and Machine Learning application performance, as it enables native GPU-Direct® technologies.
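On a Linux host, each half of a Socket Direct adapter appears as its own PCIe function with its own network interface, and sysfs reports which NUMA node (CPU socket) that function is local to. The sketch below is a minimal illustration under those assumptions; it only reads standard sysfs attributes, and any interface names it prints depend on your system.

```python
# Minimal sketch, assuming a Linux host with sysfs mounted at /sys.
# For a Socket Direct adapter, each PCIe function shows up as a separate
# network interface whose "numa_node" should map to a different CPU socket.
import os

SYS_NET = "/sys/class/net"

def numa_node(iface: str) -> str:
    """Return the NUMA node of the PCIe device behind a network interface."""
    path = os.path.join(SYS_NET, iface, "device", "numa_node")
    try:
        with open(path) as f:
            return f.read().strip()   # "-1" means no NUMA affinity reported
    except FileNotFoundError:
        return "n/a"                  # virtual interfaces have no PCIe device

for iface in sorted(os.listdir(SYS_NET)):
    print(f"{iface}: NUMA node {numa_node(iface)}")
```

Pinning an application's threads and memory to the NUMA node reported for the interface it uses (for example with numactl) keeps its traffic off the inter-processor bus, which is the behavior Socket Direct is designed to exploit.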

Flexible Data Speeds
- ConnectX-6 Socket Direct cards provide HDR 200Gb/s or 200GbE ports over two PCIe Gen3 x16 slots.
- ConnectX-5 Socket Direct cards provide an EDR 100Gb/s or 100GbE transmission rate over two PCIe Gen3 x8 slots.
- ConnectX-6 OCP 3.0 cards provide HDR 200Gb/s or 200GbE ports over one PCIe Gen3/Gen4 x16 slot.
- ConnectX-6 Dx OCP 3.0 cards provide a 100GbE port over two PCIe Gen4 x16 slots. (A rough bandwidth check of these slot configurations follows below.)
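As a quick sanity check of the slot configurations listed above: PCIe Gen3 runs at 8 GT/s per lane and Gen4 at 16 GT/s, both with 128b/130b encoding. The sketch below compares the resulting per-direction slot bandwidth against the port speeds; it deliberately ignores PCIe protocol overhead (TLP headers, flow control), so the numbers are rough upper bounds, not vendor specifications.

```python
# Back-of-the-envelope PCIe bandwidth per direction, after 128b/130b encoding.
# Protocol overhead is ignored, so these figures are upper bounds only.
GT_PER_LANE = {"Gen3": 8.0, "Gen4": 16.0}   # GT/s per lane
ENCODING = 128 / 130                        # 128b/130b line encoding

def slot_gbps(gen: str, lanes: int) -> float:
    """Raw usable bandwidth of one PCIe slot in Gb/s, one direction."""
    return GT_PER_LANE[gen] * ENCODING * lanes

configs = {
    "2x PCIe Gen3 x16 (ConnectX-6 Socket Direct, HDR 200Gb/s)": 2 * slot_gbps("Gen3", 16),
    "2x PCIe Gen3 x8  (ConnectX-5 Socket Direct, EDR 100Gb/s)": 2 * slot_gbps("Gen3", 8),
    "1x PCIe Gen4 x16 (ConnectX-6 OCP 3.0, HDR 200Gb/s)":       slot_gbps("Gen4", 16),
}

for name, gbps in configs.items():
    print(f"{name}: ~{gbps:.0f} Gb/s")
# ~252, ~126 and ~252 Gb/s respectively -- headroom above the port speeds listed above.
```

The same arithmetic shows why 200Gb/s ports call for either PCIe Gen4 or two Gen3 slots: a single Gen3 x16 link tops out around 126 Gb/s per direction.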
Enhanced Performance and Easy Management
Mellanox Socket Direct adapters can be connected to a BMC using MCTP over SMBus or MCTP over PCIe, just like a standard Mellanox PCIe stand-up adapter, and can be configured transparently by the server’s management solution.
Understanding Mellanox Socket Direct and Mellanox Multi-Host Technologies
Mellanox Socket Direct technology utilizes the same underlying technology that enables Mellanox Multi-Host; however, the motivation and server design differ in each case, as shown in the following table.
| Feature | Mellanox Multi-Host | Mellanox Socket Direct |
|---|---|---|
| Motivation | Reduces CAPEX/OPEX | Improves performance |
| Server Config | Individual servers, each with its own OS instance | Single server running a single OS |
| PCIe Signals | Individual PCIe Reset, Clock, etc. | Common PCIe Reset, Clock, etc. |
| BMC | Supports individual BMCs | Single BMC |
| Pre-boot | Individual pre-boot instance | Single pre-boot instance |
| Ordering Part No. | Max. Speed | Ports | Connectors | ASIC & PCI Dev ID | PCIe | Lanes |
|---|---|---|---|---|---|---|
| Mellanox ConnectX®-6 Dx Ethernet | | | | | | |
| MCX623435MN-CDAB | 100GbE | 1 | QSFP56 | ConnectX®-6 Dx | OCP 3.0, Multi-Host or Socket Direct, PCIe 4.0 x16 | 2x8 |
| Mellanox ConnectX®-6 VPI | | | | | | |
| MCX653105A-EFAT | HDR100, EDR IB (100Gb/s) and 100GbE | 1 | QSFP56 | ConnectX®-6 | Socket Direct PCIe 3.0/4.0 x16, split into two x8 | 2x8 in a row |
| MCX653106A-EFAT | HDR100, EDR IB (100Gb/s) and 100GbE | 2 | QSFP56 | ConnectX®-6 | Socket Direct PCIe 3.0/4.0 x16, split into two x8 | 2x8 in a row |
| MCX654106A-ECAT | HDR100, EDR IB (100Gb/s) and 100GbE | 2 | QSFP56 | ConnectX®-6 | Socket Direct PCIe 3.0 x16 + PCIe 3.0 x16 auxiliary card | 2x16 |
| MCX654105A-HCAT | HDR IB (200Gb/s) and 200GbE | 1 | QSFP56 | ConnectX®-6 | Socket Direct PCIe 3.0 x16 + PCIe 3.0 x16 auxiliary card | 2x16 |
| MCX654106A-HCAT | HDR IB (200Gb/s) and 200GbE | 2 | QSFP56 | ConnectX®-6 | Socket Direct PCIe 3.0 x16 + PCIe 3.0 x16 auxiliary card | 2x16 |
| Mellanox ConnectX®-5 VPI | | | | | | |
| MCX556M-ECAT-S25 | EDR IB (100Gb/s) and 100GbE | 2 | QSFP28 | ConnectX®-5 | Socket Direct PCIe 3.0 x8 + PCIe 3.0 x8 auxiliary card, 25cm harness | 2x8 |
| MCX556M-ECAT-S35A | EDR IB (100Gb/s) and 100GbE | 2 | QSFP28 | ConnectX®-5 | Socket Direct PCIe 3.0 x8 + PCIe 3.0 x8 auxiliary card, 35cm harness | 2x8 |