Mellanox Technologies
*********************

OFED Driver for VMware(R) Infrastructure ESXi 5.5 & 6.0
Release Notes

Driver Version 2.4.0.0
Last Updated: January 2016

===============================================================================
Table of Contents
===============================================================================
1. Overview
2. Key New Features
3. Contents of the Mellanox OFED ESXi Package
4. Supported Platforms and Operating Systems
5. Supported HCAs
6. Known Issues

===============================================================================
1. Overview
===============================================================================
These are the release notes of the "OFED Driver for VMware(R) vSphere 5.5 and
6.0". This document describes the drivers for Mellanox Technologies
ConnectX(R)-based adapter cards in a VMware ESXi Server environment.

Mellanox OFED ESXi is a software stack based on the OpenFabrics (OFED) Linux
stack, adapted for VMware. It supports up to 56Gb/s InfiniBand (IB) or up to
40Gb/s Ethernet (ETH), with 2.5 or 5.0 GT/s PCI Express 2.0 or 8 GT/s PCI
Express 3.0 uplinks to servers.

===============================================================================
2. Key New Features
===============================================================================
- Compatibility with ESXi 6.0

===============================================================================
3. Contents of the Mellanox OFED ESXi Package
===============================================================================
The MLNX-OFED-ESX package contains:
 o MLNX-OFED-ESX-2.4.0.0-10EM-550.0.0.1331820.zip
 o MLNX-OFED-ESX-2.4.0.0-10EM-600.0.0.2494585.zip

These hypervisor bundles contain the following kernel modules:
 - mlx_compat (Mellanox Technologies compatibility layer)
 - mlx4_core  (ConnectX family low-level PCI driver)
 - mlx4_ib    (ConnectX family InfiniBand driver)
 - ib_core    (Kernel InfiniBand API)
 - ib_sa      (InfiniBand subnet administration query support)
 - ib_mad     (Kernel IB MAD API)
 - ib_ipoib   (Mellanox Technologies IP-over-InfiniBand driver)
 - mlx4_en    (Mellanox Technologies Ethernet driver)

The driver package is distributed as an offline bundle (.zip file).
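Below is a minimal installation sketch based on these notes. It is not a
substitute for the procedure in the User Manual: the bundle path is an
example, and the inbox (native) VIB names shown for the ESXi 6.0 removal step
(nmlx4-core, nmlx4-en, nmlx4-rdma) are assumptions that should be verified on
the host first.

  # ESXi 6.0 only: identify and remove the Mellanox inbox (native) driver VIBs
  # before installing this driver (see also Section 6.1).
  # (VIB names below are assumptions; confirm them with the list command.)
  esxcli software vib list | grep nmlx
  esxcli software vib remove -n nmlx4-rdma -n nmlx4-en -n nmlx4-core

  # Install the offline bundle (use an absolute path on the host), then reboot.
  esxcli software vib install -d /tmp/MLNX-OFED-ESX-2.4.0.0-10EM-600.0.0.2494585.zip
  reboot
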
===============================================================================
4. Supported Platforms and Operating Systems
===============================================================================
 o CPU architectures:
   - x86_64
 o ESX Hypervisor:
   - ESXi 5.5Ux
   - ESXi 6.0Ux
 o SR-IOV Virtual Machine Drivers:
   - MLNX_OFED 3.1-1.0.3 (SR-IOV Ethernet)
   - WinOF 4.95 (SR-IOV InfiniBand)

===============================================================================
5. Supported HCAs
===============================================================================
This release supports the following Mellanox Technologies HCAs and firmware
versions:
 - ConnectX-3 Pro: firmware 2.36.5000
 - ConnectX-3:     firmware 2.36.5000
Please note that older firmware versions were not tested with this release.

===============================================================================
6. Known Issues
===============================================================================
6.1 General Known Issues
-------------------------------------------------------------------------------
o Unloading the driver is not supported by VMware.
o In ESXi 6.0, the Mellanox inbox (native) driver must be removed before this
  driver is installed (see the installation sketch in Section 3).
o After changing module parameter values, a reboot is required.
o VPI configuration with one port set to IB and one port set to ETH is not
  supported.
o Port type auto-sense is not supported.
o The default port type of VPI cards is IB. To set the port type to ETH, set
  the port_type_array module parameter to 2 (see the sketch after this list).
  For further information, please refer to the User Manual section "Setting Up
  SR-IOV".
o SR-IOV passthrough network adapters are not supported.
o SR-IOV updates and status queries are not supported in the vSphere Web
  Client. Please refer to the User Manual for the proper use of SR-IOV.
o VMs with an SR-IOV interface might fail to power on with an "out of MSI-X
  vectors" message in vmkernel.log. To resolve this issue, add the
  pciPassthru.maxMSIXvectors parameter to the VM's configuration file (see the
  sketch after this list). The maximum value allowed for this parameter is 31.
  We suggest setting the value according to the following equation:
    pciPassthru.maxMSIXvectors = ... + 2
  For additional explanation, refer to:
  http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.networking.doc%2FGUID-880A9270-807F-4F2A-B443-71FF01DCC61D.html
  On Windows VMs this configuration is required. See the following VMware KB
  article:
  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2032981
o The number of concurrent 40Gb ports is limited to 4 by VMware. For full
  details, please see:
  https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf
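Below is a minimal sketch of the host-side settings referenced in the list
above. The exact mlx4_core parameter string and the maxMSIXvectors value are
assumptions; consult the User Manual for the authoritative procedure and
choose the MSI-X value according to the equation above.

  # Set the port type to ETH (2) through the mlx4_core module parameters,
  # then reboot the host for the change to take effect.
  # (Parameter name per the list above; the exact value format may differ.)
  esxcli system module parameters set -m mlx4_core -p "port_type_array=2"
  esxcli system module parameters list -m mlx4_core

  # In the VM's .vmx configuration file (VM powered off), reserve additional
  # MSI-X vectors. (31 is the maximum allowed; pick a value per the equation
  # above rather than defaulting to the maximum.)
  pciPassthru.maxMSIXvectors = "31"
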
6.2 Ethernet Known Issues
-------------------------------------------------------------------------------
o The maximum number of supported VFs in SR-IOV EN is 32. The number of
  Mellanox adapters that can be used on a server with SR-IOV EN can be
  calculated as follows:
    (4 + min(number_of_cores, 16)*2 + number_of_vfs*3) * number_of_mellanox_adapters <= 128
  For example, on a server with 16 or more cores and two Mellanox adapters, at
  most 9 VFs can be enabled per adapter, since (4 + 32 + 9*3) * 2 = 126 <= 128.
o The maximum number of PV interfaces, including VMkernel adapters (each with
  a different MAC address), is 32.
o SR-IOV EN can be enabled on Ethernet cards only. To set the HCA ports to
  ETH, use mlxconfig:
    mlxconfig -d <device> set LINK_TYPE_P1=2 LINK_TYPE_P2=2
o Changing the ring size via ethtool or the mlx4_en module parameters is
  currently not supported.
o Windows guests over SR-IOV EN are not supported.
o Changing the default VLAN during background traffic might cause traffic
  loss.
o Setting 'mac' load-balancing failover is not supported with multicast
  traffic.

6.3 IPoIB Known Issues
-------------------------------------------------------------------------------
o IPv6 is not supported in the current release.
o Triplet configuration of the num_vfs module parameter (mlx4_core module) is
  not supported in IB SR-IOV.
o Upgrading the IPoIB driver is currently unsupported. To upgrade to the
  newest driver, uninstall the previous version and only then install the
  latest one.
o The "esxcli network sriovnic list" command is not supported.
o OpenSM must be running before Windows VMs with a virtual function are
  started; otherwise, the virtual function will fail to start. To resolve this
  issue, delay the start of all Windows VMs with a virtual function until
  OpenSM is running.

6.3.1 IPoIB Para-Virtual Known Issues
-------------------------------------------------------------------------------
o Packets might get dropped upon initial communication between interfaces.
o TCP traffic may resume only after a certain delay following a port down/up
  event.
o The number of MAC addresses per host is limited to 400.
o Using the ibportstate utility (MLNX_OFED) to set a port down/up is not
  supported.
o The minimum supported MTU value is 1500.
o The NIC teaming feature is not supported in the current release.
o Poor TCP performance may occur in Linux virtual machines with LRO enabled.
  To solve the issue, please refer to the VMware website:
  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1027511
o Migration/vMotion known issues:
  - Frequent migration/VLAN changes may cause connectivity loss of up to
    1 minute.
  - When using VGT, connectivity may be lost for up to 40 seconds after
    vMotion. To restore connectivity, send a ping from the migrated VM.
  Workaround: To resolve the issues above, flush the ARP tables on all sources
  and targets (see the sketch at the end of these notes).

6.4 RoCE
-------------------------------------------------------------------------------
o RoCEv2 is currently not supported.
o RoCE can be configured and run only on Virtual Machines which are associated
  with SR-IOV EN Virtual Functions.
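Below is a minimal sketch of the ARP-flush workaround mentioned in Section
6.3.1. The commands run inside the affected guests (and on any peer that
caches the migrated VM's address); they rely on standard Linux and Windows
tools and are not part of this driver package.

  # Linux guest: flush the neighbor (ARP) cache
  ip neigh flush all

  # Windows guest (elevated command prompt): clear the ARP cache
  arp -d *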