The OpenStack Summit is a four-day conference for developers, users, and administrators of OpenStack cloud software. Held every six months, the conference schedule rotates based on the OpenStack software release cycle. This week, the summit is being held in Tokyo, Japan at the Grand Prince International Convention Center.
Today, we held a joint session (with Irena Berezovsky of Midokura and Livnat Peer of Red Hat) about Quality of Service in the cloud. I presented a customer use case and talked about Mellanox NEO, containers, virtualization, auto provisioning, and SR-IOV LAG.
Tomorrow is the last chance to visit Mellanox’s booth (S8) and see the 100Gbps Cloud Solution based on Spectrum, ConnectX-4 and Ceph RDMA. Make sure to stop by and talk with us! Here are some photos from today’s session along with the Mellanox booth:
Virtualization has already proven itself to be the best way to improve data center efficiency and to simplify management tasks. However, getting those benefits requires using the various services that the Hypervisor provides. This introduces delay and results in longer execution times compared to running on non-virtualized (native) infrastructure. This drawback has not escaped the R&D community, which continues to seek ways to enjoy the advantages of virtualization with minimal impact on performance.
One of the most popular ways to achieve native performance today is the SR-IOV (Single Root I/O Virtualization) mechanism, which bypasses the Hypervisor and creates a direct link between the VM and the I/O adapter. However, although the VM gains native performance, it loses all of the Hypervisor's services: important features like high availability (HA) and VM migration can no longer be done easily. SR-IOV also requires that the VM run the specific NIC driver it communicates with, which complicates management, since IT managers can no longer rely on the common paravirtual driver that runs between the VM and the Hypervisor.
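As a rough illustration of how the direct link is set up on the host side, SR-IOV Virtual Functions (VFs) on Linux are typically created through sysfs and then passed through to VMs as PCI devices. This is only a configuration sketch; the interface name `ens1f0` and the VF count are placeholder assumptions, and the commands require SR-IOV capable hardware and firmware.

```shell
# Configuration sketch: creating SR-IOV Virtual Functions on a Linux host.
# 'ens1f0' is a placeholder interface name; adjust to your environment.

# Check how many Virtual Functions the NIC supports
cat /sys/class/net/ens1f0/device/sriov_totalvfs

# Create 4 Virtual Functions on the Physical Function
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs

# Each VF now appears as its own PCI device, which can be assigned
# directly to a VM (e.g. via KVM/libvirt PCI passthrough)
lspci | grep -i "Virtual Function"
```

Because each VF is handed to the guest as raw hardware, the Hypervisor no longer sits in the data path, which is exactly why services like live migration become harder with SR-IOV.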
As virtualization becomes a standard technology, the industry continues to find ways to improve performance without losing its benefits, and organizations have started to invest more in deploying RDMA-enabled interconnects in virtualized data centers. In one of my previous blogs, I discussed the proven deployment of RoCE (RDMA over Converged Ethernet) in Azure using SMB Direct (SMB 3.0 over RDMA) to enable faster access to storage.