Linux Drivers

Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED)

Clustering using commodity servers and storage systems is seeing widespread deployment in large and growing markets such as high performance computing (HPC), Artificial Intelligence (AI), data warehousing, online transaction processing, financial services, and large-scale cloud deployments. To enable distributed computing transparently and with maximum efficiency, applications in these markets require the highest I/O bandwidth and the lowest possible latency. These requirements are compounded by the need to support a large, interoperable ecosystem of networking, virtualization, storage, and other applications and interfaces.

The OFED distribution from the OpenFabrics Alliance (www.openfabrics.org) has been hardened through collaborative development and testing by major high-performance I/O vendors. Mellanox OFED (MLNX_OFED) is a Mellanox-tested and packaged version of OFED that supports two interconnect types, InfiniBand and Ethernet, through the same RDMA (Remote Direct Memory Access) and kernel-bypass APIs, called OFED verbs. MLNX_OFED supports up to 200Gb/s InfiniBand, as well as RoCE (based on the RDMA over Converged Ethernet standard) over 10/25/40/50/100GbE, enabling OEMs and system integrators to meet the needs of end users in these markets.

Linux Inbox Drivers

Mellanox adapters' Linux VPI drivers for Ethernet and InfiniBand are also available inbox in all major distributions, including RHEL, SLES, and Ubuntu. Inbox drivers enable Mellanox high-performance solutions for cloud, Artificial Intelligence, HPC, storage, financial services, and more, with the out-of-box experience of enterprise-grade Linux distributions.


View the matrix of MLNX_OFED driver versions vs. supported hardware and firmware for Mellanox products.


  • Virtual Protocol Interconnect (VPI) allows the Mellanox ConnectX adapter family to run InfiniBand and Ethernet traffic simultaneously on two ports
  • A single software stack that operates across all available Mellanox InfiniBand and Ethernet devices and configurations, such as mem-free, SDR/DDR/QDR/FDR/EDR/HDR, 10/25/40/50/100/200 GbE, and PCI Express 3.0 and 4.0 modes
  • Support for HPC applications for scientific research, AI, oil and gas exploration, car crash tests, benchmarking, etc. (e.g., Fluent, LS-DYNA)
  • Support for data center applications such as Oracle 11g/10g RAC and IBM DB2, and financial services applications such as IBM WebSphere LLM, Red Hat MRG, and NYSE Data Fabric
  • Support for high-performance block storage applications utilizing RDMA benefits

Note: By downloading and installing the MLNX_OFED package for Oracle Linux (OL), you may be violating your operating system's support matrix. Please consult your operating system's support before installing.

Note: MLNX_OFED LTS serves customers who would like to utilize one of the following:
  • ConnectX-3 Pro
  • ConnectX-3
  • Connect-IB
  • RDMA experimental verbs library (mlnx_lib)
For other use cases, it is recommended to use the latest MLNX_OFED 5.x-x.x.x.x driver.
