Storage Spaces Direct: If Not RDMA, Then What? If Not Mellanox, Then Who?

 

Over the past couple of years, we have witnessed significant architectural changes in modern data center storage systems. These changes have had a dramatic effect, practically replacing the traditional Storage Area Network (SAN), which had been the dominant solution for over a decade.

 

When analyzing the market trends that led to this change, it becomes very clear that virtualization is the main culprit. The SAN architecture was very efficient when only one workload was accessing the storage array, but it has become much less efficient in a virtualized environment in which different workloads arrive from different independent Virtual Machines (VMs).

 

To better understand this concept, let’s use a city’s traffic light system as an analogy to a data center’s data traffic. In this analogy, the cars are the data packets (coming in different sizes), and the traffic lights are the data switches. Before the city programs a traffic light’s control, it conducts a thorough study of the traffic patterns of that intersection and the surrounding area.

 

The traffic light is programmed accordingly, and it therefore controls the vehicular traffic in the most efficient way possible. However, if the same policy is used at another intersection, it will not be efficient and could create severe traffic jams, since different intersections handle different vehicle types (cars, trucks, etc.) and serve different kinds of locations (school zones, industrial areas, etc.).

 

Applying the same logic to a virtualized environment helps explain why a storage system that has been optimized to efficiently support a specific workload will not be as effective under a mix of workloads.

 

Virtualization hasn’t been the only reason for the change. The need to store and process significantly larger amounts of data has created a corresponding need for a new architecture that is easy to scale, easy to manage, and less costly than the traditional SAN deployments.

 

This has led to the development of the Scale-out Software Defined Storage architecture, in which the storage is constructed by integrating the servers’ local storage devices into one big pool that provides all the required storage services.

 

The efficiency of this advanced architecture relies heavily on networking performance. This is where RDMA shines, since RDMA-enabled fabrics move data with significantly lower latency while consuming significantly fewer CPU cycles than TCP/IP-based fabrics.

 

At the recent Ignite 2015 conference, Microsoft announced Storage Spaces Direct, its software-defined storage solution for private cloud using industry-standard servers with local storage.

 


Figure 1: Microsoft’s Windows Server 2016 Storage Spaces Direct Solution

 

The solution, which will be part of the upcoming Windows Server 2016, connects Hyper-V clients to the Scale-Out File Server cluster (with built-in SSDs) using SMB 3.0 over RDMA, a protocol mode known as SMB Direct.
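For readers who want to verify that SMB Direct is actually carrying the traffic on a given host, the standard Windows diagnostics are the Get-NetAdapterRdma and Get-SmbMultichannelConnection PowerShell cmdlets. The short Python sketch below simply wraps those cmdlets from a script; it assumes a Windows Server host with PowerShell on the PATH and is meant only as an illustrative check, not as part of the Storage Spaces Direct setup itself.

```python
import subprocess


def run_powershell(command: str) -> str:
    """Run a PowerShell command and return its text output."""
    completed = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True,
        text=True,
        check=True,
    )
    return completed.stdout


# Which network adapters report RDMA as enabled?
print(run_powershell("Get-NetAdapterRdma | Format-Table Name, Enabled -AutoSize"))

# Are the active SMB multichannel connections RDMA-capable?
# When SMB Direct is carrying the traffic, the RDMA-capable fields report True.
print(run_powershell("Get-SmbMultichannelConnection | Format-List"))
```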

 

One of the most impressive demos given at the show ran SMB Direct over 100GbE using the Mellanox ConnectX®-4 100Gb/s RoCE (RDMA over Converged Ethernet) NIC. The demo used two servers, a client and a file server, connected back-to-back with a single 100Gb/s link.

 

The following table shows the results of testing the direct-connect client-server configuration with SATA versus NVMe SSDs, with RDMA enabled and disabled:

Table 1: SATA versus NVMe SSDs, with and without RDMA

 

The three-NVMe-SSD configuration with RDMA enabled was the clear winner, delivering the highest throughput and the lowest latency. Running over Mellanox's RDMA-enabled interconnect solution doubles the throughput, cuts the latency by 50%, and increases CPU efficiency by more than 33%.
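To make those headline figures concrete, each one reduces to a simple ratio: throughput gain, latency reduction, and IOPS delivered per percent of CPU utilization (a common way to express CPU efficiency). The sketch below uses hypothetical placeholder measurements, not the actual demo results, purely to show how such deltas are computed.

```python
# Hypothetical placeholder measurements (NOT the actual demo results),
# chosen only to illustrate how the throughput, latency, and
# CPU-efficiency deltas quoted above are typically derived.
tcp  = {"iops": 500_000,   "latency_us": 200.0, "cpu_util_pct": 40.0}
rdma = {"iops": 1_000_000, "latency_us": 100.0, "cpu_util_pct": 60.0}

throughput_gain   = rdma["iops"] / tcp["iops"]                  # 2.0x with these numbers
latency_reduction = 1 - rdma["latency_us"] / tcp["latency_us"]  # 0.5 (50%) with these numbers

# CPU efficiency expressed as IOPS per percent of CPU utilization.
eff_tcp  = tcp["iops"] / tcp["cpu_util_pct"]
eff_rdma = rdma["iops"] / rdma["cpu_util_pct"]
efficiency_gain = eff_rdma / eff_tcp - 1                        # ~0.33 (one-third) with these numbers

print(f"Throughput gain:   {throughput_gain:.1f}x")
print(f"Latency reduction: {latency_reduction:.0%}")
print(f"CPU efficiency up: {efficiency_gain:.0%}")
```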

 

A short video shows the impact of RDMA networking on an SMB3 workload running over 100Gb/s RoCE.

 

 

Storage Spaces Direct leverages the increased performance that SMB Direct enables to accelerate access to storage. However, it’s not just about the performance. A white paper published by ESG Labs shows that compared to FC and iSCSI, SMB Direct also cuts the cost by 50%.

 

All of these advantages have been integrated into Windows Server 2016 Storage Spaces Direct and were demoed in a presentation given by Claus Joergensen at Ignite: “Enabling Private Cloud Storage Using Servers with Local Disks”.

 

The demonstration showed that when running Windows Server 2016's new Storage Spaces Direct feature with Mellanox's 56Gb/s RDMA-enabled end-to-end solution and Micron M500DC local SATA storage, the configuration delivered 1 million IOPS, compared to less than 600K IOPS when running over TCP/IP (see the video at the 57-minute mark of the presentation).

 

Future configurations will run over Micron’s new NVMe SSD and use Mellanox 100Gb/s RoCE, which will further accelerate applications’ access to storage.

 

But even now, it is clear that RDMA-based networks already bring unmatched value to modern Scale-Out Software Defined Storage systems.

About Motti Beck

Motti Beck is Sr. Director of Enterprise Market Development at Mellanox Technologies, Inc. Before joining Mellanox, Motti was a founder of BindKey Technologies, an EDA startup that provided deep-submicron semiconductor verification solutions and was acquired by DuPont Photomasks, and of Butterfly Communications, a pioneering startup provider of Bluetooth solutions that was acquired by Texas Instruments. Prior to that, he was a Business Unit Director at National Semiconductor. Motti holds a B.Sc. in Computer Engineering from the Technion – Israel Institute of Technology. Follow Motti on Twitter: @MottiBeck
