NVMe Over Fabrics Passes First Multi-Vendor Interoperability Test


Earlier this week at the University of New Hampshire (UNH) in Durham, another milestone in the life of NVM Express over Fabrics (NVMe-oF) technology occurred: a successful multi-vendor interoperability test! Last year, I wrote about the release of version 1.0 of the open industry standard for NVMe-oF by NVMe.org; it was a defining milestone for NVMe-oF. The year before, I wrote about the first public demonstration of NVMe-oF at the National Association of Broadcasters (NAB) show. Both were important achievements for this exciting new, extremely high-performance networked storage technology. But for a technology to be truly successful in the world of high tech, it must not only be an open industry standard available from multiple vendors; those vendors must also make their implementations interchangeable, so that buyers can mix and match offerings based on price, features, availability, and so on. Yesterday those criteria were met. Like its older brother NVMe, NVMe-oF was tested by UNH and proven to interoperate successfully.

The University of New Hampshire InterOperability Laboratory (UNH-IOL)

The mission of the UNH-IOL is to provide a neutral environment that enables multi-vendor interoperability, conformance to standards, and, with those, the improvement of data networking. UNH students have been doing the testing since 1988, which has the nice side benefit of turning out graduates with hands-on experience in IT technologies. Over the years, the lab has grown into one of the industry’s best-known independent proving grounds for new technologies. I was first involved with the UNH-IOL back in the ’90s, when it hosted the first interoperability lab for Fibre Channel. It seems fitting that more than 20 years later, it is hosting the first test for NVMe-oF, a technology every bit as revolutionary as Fibre Channel was back then.

NVMe-oF Technology

Recall that my colleague, John Kim, authored earlier blogs detailing how NVMe-oF technology enables the new high-performance SSD interface, Non-Volatile Memory Express (NVMe), to be connected across RDMA-capable networks. Add new Ethernet and InfiniBand speeds, which now top out at 100Gb/s (soon 200/400Gb/s), and NVMe-oF will dramatically improve the performance of existing storage network applications. It will also accelerate the adoption of many new and future storage architectures.

The Test Setup

The test was organized by the UNH-IOL to coincide with its regularly scheduled twice-yearly NVMe testing, leveraging the SSD expertise already on site. The high-level test plan, agreed upon ahead of time, called for participating vendors to mix and match their NICs in both the Target and Initiator positions of the topology.

The open-source NVMe-oF Linux driver software was used on both the Target and Initiator platforms. The NICs used the RoCE (RDMA over Converged Ethernet) protocol to provide the RDMA capability required by the NVMe-oF driver software. All NIC vendors were strongly encouraged by the UNH-IOL to attend. By the way, at the first Fibre Channel UNH-IOL event there were just a handful of startups: Ancor (mine), Gadzooks (great name), and perhaps a couple of others. The NVMe-oF testing this week was completely successful, with our ConnectX®-4 adapter and other vendors’ NICs fully interoperable as either the NVMe-oF Target or Initiator. Near-line-rate performance of 25Gb/s was also achieved for these adapters.
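For readers who want to try a similar setup themselves, the same open-source Linux driver stack can be exercised from the initiator side with the standard nvme-cli utility. The sketch below is a minimal example, assuming a Linux host with a RoCE-capable NIC and a target already exported on the fabric; the IP address, port, and subsystem NQN are placeholders for illustration, not details from the UNH-IOL test:

```shell
# Load the RDMA transport module for the NVMe-oF initiator
modprobe nvme-rdma

# Discover subsystems exported by the target
# (address and port are hypothetical placeholders)
nvme discover -t rdma -a 192.168.1.10 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder NQN)
nvme connect -t rdma -n nqn.2016-06.io.example:subsys1 \
    -a 192.168.1.10 -s 4420

# The remote namespace now appears as a local NVMe block device
nvme list
```

Because the RDMA transport bypasses the host TCP/IP stack, the connected namespace behaves like a local NVMe drive from the application's point of view, which is what makes the mix-and-match Target/Initiator testing described above possible.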

Conclusions

With another milestone for NVMe-oF ticked off the list, the technology seems fast-tracked to success. It has all the elements: cutting-edge performance, arriving at the exact moment it is needed to network new storage technologies whose performance is 100 times that of hard drives and only getting faster; an open industry standard, widely supported and contributed to by storage and networking companies big and small; and now, multi-vendor interoperability from leading suppliers of the Ethernet NICs it runs on. By the way, at the first Fibre Channel test at UNH we could barely get the adapters and switches to link up at Layer 2. And look how Fibre Channel turned out!


About Rob Davis

Rob Davis is Vice President of Storage Technology at Mellanox Technologies and was formerly Vice President and Chief Technology Officer at QLogic. As a key evaluator and decision-maker, Davis takes responsibility for keeping Mellanox at the forefront of emerging technologies, products, and relevant markets. Prior to Mellanox, Mr. Davis spent over 25 years as a technology leader and visionary at Ancor Corporation and then at QLogic, which acquired Ancor in 2000. At Ancor Mr. Davis served as Vice President of Advanced Technology, Director of Technical Marketing, and Director of Engineering. At QLogic, Mr. Davis was responsible for keeping the company at the forefront of emerging technologies, products, and relevant markets. Davis’ in-depth expertise spans Virtualization, Ethernet, Fibre Channel, SCSI, iSCSI, InfiniBand, RoCE, SAS, PCI, SATA, and Flash Storage.
