NVMe Over Fabrics Standard is Released

 

Today, the NVM Express Organization released version 1.0 of the NVM Express over Fabrics (NVMf) Standard. The effort began on September 3rd, 2014 and, through the work of many companies including Mellanox, is now complete. Although the Standard was only finished today, at Mellanox we have been running proofs of concept and demonstrations of NVMf with numerous partners and early adopter customers for more than a year.

NVMf allows the new high-performance SSD interface, Non-Volatile Memory Express (NVMe), to be connected across RDMA-capable networks. It is the first new networked storage technology built from the ground up in over 20 years. Coupled with new Ethernet and InfiniBand speeds that now top out at 100Gb/s, NVMf will not only dramatically improve the performance of existing storage network applications, but will also accelerate the adoption of new and future computing technologies such as scale-out and software-defined storage, hyperconverged infrastructure, and compute/storage disaggregation.
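
To make the host side concrete, here is a minimal sketch of how a Linux initiator attaches an NVMf subsystem over RDMA by writing connect parameters to the kernel's /dev/nvme-fabrics interface, which is essentially what the nvme-cli "nvme connect" command does under the hood. This is an illustration only: the target address, port, and subsystem NQN below are hypothetical, and it assumes the nvme-rdma kernel module is loaded.

/* nvmf_connect.c: attach an NVMe over Fabrics subsystem via RDMA.
 * A minimal sketch, roughly equivalent to:
 *   nvme connect -t rdma -a 192.168.0.10 -s 4420 -n <subsystem NQN>
 * Assumes the nvme-rdma module is loaded; the address and NQN below
 * are hypothetical placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Connect parameters as comma-separated key=value pairs. */
    const char *opts =
        "transport=rdma,"
        "traddr=192.168.0.10,"                       /* hypothetical target IP */
        "trsvcid=4420,"                              /* default NVMf port      */
        "nqn=nqn.2016-06.io.example:nvmf-target1";   /* hypothetical NQN       */

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) {
        perror("open /dev/nvme-fabrics");
        return 1;
    }

    /* On success the kernel creates a new NVMe controller on the host. */
    if (write(fd, opts, strlen(opts)) < 0) {
        perror("nvmf connect");
        close(fd);
        return 1;
    }

    printf("NVMf controller created; check /dev/nvme* or run 'nvme list'\n");
    close(fd);
    return 0;
}

Once connected, the remote namespaces appear on the initiator as ordinary NVMe block devices and can be used like local drives.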

Why Would I Want It? Because “Faster Storage Needs Faster Networks!”

The performance of storage technologies has increased 100 times in the last five years as Flash-based Solid State Disks (SSDs), and especially SSDs connected over NVMe, have come to market.

Figure 1: Newer storage is exponentially faster than older storage.

New Non-Volatile Memory (NVM) and Persistent Memory (PM) technologies that are again 100 times faster than today’s SSDs are just around the corner. Without faster network speeds and protocol technologies such as NVMf, the performance of these new SSDs and NVM technologies will be locked up inside the server.


Figure 2: As storage latencies decrease, protocol and network latencies become relatively more important and must also be reduced.

As storage gets faster, the network wire speed and protocol become the bottleneck. We can speed up the wires with the latest Ethernet and InfiniBand data rates, but a more efficient protocol is also needed. Fortunately, NVMf leverages RDMA (Remote Direct Memory Access), which lets it move data across the network with very little protocol overhead.

Figure 3: RDMA allows direct, zero-copy and hardware-accelerated data transfers to server or storage memory, reducing network latencies and offloading the system CPU.

RDMA, carried over InfiniBand or RoCE (RDMA over Converged Ethernet), allows data in memory to be transferred between computers and storage devices across a network with little or no CPU intervention. This is done with hardware transport offloads on network adapters that support RDMA.
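
For a feel of how software taps into this, below is a minimal libibverbs sketch, an illustration only, assuming a Linux host with rdma-core installed and an RDMA-capable adapter present. It opens the first RDMA device and registers a buffer for zero-copy access: registration pins the memory and gives the adapter the keys it needs to move data directly in and out of that buffer. An NVMf initiator or target driver performs these steps, plus queue-pair setup, on the application's behalf.

/* rdma_reg.c: open an RDMA device and register memory for zero-copy I/O.
 * A minimal sketch using libibverbs (build with: gcc rdma_reg.c -libverbs).
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    /* Open the first RDMA device (for example, a ConnectX port). */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!ctx || !pd) {
        fprintf(stderr, "failed to open device or allocate PD\n");
        return 1;
    }

    /* Register a 1 MiB buffer: the memory is pinned and the adapter can
     * now DMA directly into and out of it without copying through the CPU. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }

    printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* A peer that learns the buffer address and rkey can now issue
     * RDMA READ/WRITE operations against it with no CPU involvement. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}

Once memory is registered, the actual data movement is executed by the adapter hardware, which is what keeps the protocol overhead and added latency so low.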

How Fast Is It?

Performance depends on many factors: the SSDs, the Initiator (server) and Target (storage device) architectures, and of course the network. Here are the results of one test conducted with a partner for a conference last year:


Figure 4: Pre-standard NVMf demo with Mellanox 100GbE networking demonstrates extremely low fabric latencies compared to using the same NVMe SSDs locally.

The most interesting numbers are the added latency figures: the difference in latency between testing the SSDs locally in the Target Server and testing the same SSDs remotely across the network with NVMf. It should be noted that this was an early, pre-standard version of NVMf and used highly optimized Initiator and Target systems tightly integrated with the SSDs, connected by dual 100GbE links using Mellanox ConnectX-4 Ethernet adapters. But even doubling or tripling these numbers gives performance unattainable with existing storage networking technologies.
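
For readers who want to run a rough version of this comparison themselves, the sketch below times 4KB direct-I/O reads against a block device; pointing it first at a local NVMe namespace and then at an NVMf-attached one, and subtracting the averages, approximates the added fabric latency. It is an illustration only: the device paths are hypothetical, and a real benchmark would use a tool such as fio with much more care about queue depth and caching.

/* lat_probe.c: rough 4KB read-latency probe for an NVMe block device.
 * A sketch only; run it against a local NVMe namespace and again against
 * an NVMf-attached one, then compare the averages to estimate the added
 * fabric latency. Device paths are hypothetical. Build: gcc -O2 lat_probe.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/nvme0n1"; /* hypothetical */
    int fd = open(dev, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror(dev); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) return 1;  /* O_DIRECT alignment */

    const int iters = 10000;
    struct timespec t0, t1;
    double total_us = 0.0;

    for (int i = 0; i < iters; i++) {
        /* 4KB-aligned pseudo-random offset within the first 1 GiB. */
        off_t off = ((off_t)(rand() % (1 << 18))) * 4096;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (pread(fd, buf, 4096, off) != 4096) { perror("pread"); return 1; }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        total_us += (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e3;
    }

    printf("%s: average 4KB read latency %.1f us over %d reads\n",
           dev, total_us / iters, iters);
    close(fd);
    free(buf);
    return 0;
}

Run it once against the local device and once against the NVMf-attached one (substituting your own device paths), then compare the two averages.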

Mellanox Offers the Best Networking for NVMf

This new standard is doubly exciting to me because Mellanox is the clear leader in both RDMA and high-speed networking. We have more than 90 percent of the market share for both 40GbE adapters and InfiniBand adapters, and we were first to market with 25/50/100Gb/s Ethernet adapters and switches. The high performance of Mellanox networking helps the faster NVMe SSDs and the more efficient NVMf protocol shine. In addition, Mellanox just announced BlueField, a multi-core System-on-Chip that is ideal for controlling and connecting an NVMf flash shelf.

Figure 5: Mellanox BlueField includes high-speed networking, RDMA offloads, multiple CPU cores, many PCIe lanes, and DRAM, making it the ideal NVMe over Fabrics shelf controller.

Conclusion

I was lucky enough, or old enough, to have been around for and worked on the version 1.0 release of the Fibre Channel specification in 1994. So I am not making it up when I say, “This is the first new built-from-the-ground-up networked storage technology to come along in over 20 years.” The excitement and interest in the computer industry is even higher now than it was back then. NVMf is the perfect technology to fill the gap that has recently opened between storage performance and storage network performance. At Mellanox, we have a suite of products to fill this gap, which we have been developing and testing with partners over the last couple of years.

Faster storage needs faster networks!


About Rob Davis

Rob Davis is Vice President of Storage Technology at Mellanox Technologies and was formerly Vice President and Chief Technology Officer at QLogic. As a key evaluator and decision-maker, Davis is responsible for keeping Mellanox at the forefront of emerging technologies, products, and relevant markets. Prior to Mellanox, he spent over 25 years as a technology leader and visionary at Ancor Corporation and then at QLogic, which acquired Ancor in 2000. At Ancor he served as Vice President of Advanced Technology, Director of Technical Marketing, and Director of Engineering, and at QLogic he held a similar charter for emerging technologies, products, and markets. Davis’ in-depth expertise spans Virtualization, Ethernet, Fibre Channel, SCSI, iSCSI, InfiniBand, RoCE, SAS, PCI, SATA, and Flash Storage.
