All posts by Rob Davis

About Rob Davis

Rob Davis is Vice President of Storage Technology at Mellanox Technologies and was formerly Vice President and Chief Technology Officer at QLogic. As a key evaluator and decision-maker, Davis is responsible for keeping Mellanox at the forefront of emerging technologies, products, and relevant markets. Prior to Mellanox, Mr. Davis spent over 25 years as a technology leader and visionary at Ancor Corporation and then at QLogic, which acquired Ancor in 2000. At Ancor, Mr. Davis served as Vice President of Advanced Technology, Director of Technical Marketing, and Director of Engineering; at QLogic he held a similar charter, keeping the company at the forefront of emerging technologies, products, and markets. Davis’ in-depth expertise spans virtualization, Ethernet, Fibre Channel, SCSI, iSCSI, InfiniBand, RoCE, SAS, PCI, SATA, and flash storage.

NVMe over Fabrics Adoption Makes Inroads with NetApp E-Series

Another milestone in the life of NVM Express over Fabrics (NVMe-oF) technology has occurred: NetApp has become the first large enterprise external storage system vendor to support NVMe-oF. Late last month it announced the new E-5700 hybrid storage array and EF-570 all-flash array, both of which feature front-side NVMe-oF connections to clients. Only four months have passed since I wrote about the last NVMe-oF milestone, the first UNH NVMe-oF interoperability plugfest: a successful multi-vendor interoperability test at the University of New Hampshire InterOperability Laboratory (UNH-IOL). It is not unusual for a new technology’s milestones to cluster closer together as its evolution toward the mainstream accelerates. Support from a large enterprise storage vendor is extremely important for a new technology because it signals to mainstream data centers that the technology is maturing and becoming less risky to try out and deploy. It also stokes the competitive fires among the other enterprise storage vendors, creating a domino effect of initial support, then expanding features and innovation.

NVMe-oF Technology

For a refresher on NVMe-oF, my colleague John Kim has authored several blogs detailing how the technology enables the new high performance SSD interface, Non-Volatile Memory Express (NVMe), to be connected across RDMA-capable networks. A YouTube video overview is also available.
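For readers who want to see what the client side looks like in practice, below is a minimal sketch of how a Linux server might discover and connect to an NVMe-oF target using the open source nvme-cli utility and the in-kernel nvme-rdma initiator. The target address, port, and subsystem NQN are hypothetical placeholders, not values from any vendor demo.

```python
# Minimal sketch: discover and connect to an NVMe-oF target from a Linux
# initiator using the open source nvme-cli tool over an RDMA transport.
# The address, service ID, and NQN below are illustrative placeholders.
import subprocess

TARGET_ADDR = "192.168.1.100"   # hypothetical target IP (RoCE or IPoIB)
TARGET_PORT = "4420"            # default NVMe-oF service ID
TARGET_NQN = "nqn.2017-10.com.example:nvme-subsystem-1"  # hypothetical NQN

def run(cmd):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Load the RDMA transport for the NVMe host driver.
run(["modprobe", "nvme-rdma"])

# Ask the target which subsystems it exposes on this portal.
print(run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT]))

# Connect to one subsystem; a new /dev/nvmeXnY block device appears on success.
run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN, "-a", TARGET_ADDR, "-s", TARGET_PORT])

# List NVMe devices, which now include the fabric-attached namespace.
print(run(["nvme", "list"]))
```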

I attended NetApp’s Insight conference a few weeks ago and got a firsthand look at the E-5700 and EF-570.

Figure 1: The NetApp EF-570 all-flash array now supports NVMe-oF

On the conference show floor, NetApp had an impressive demo of the EF-570’s NVMe-oF capabilities, showing 21GB/s of bandwidth and 1 million IOPS at less than 100µs of latency.

Several startups and Original Design Manufacturers (ODMs) are already shipping NVMe-oF solutions with impressive performance in Just-a-Bunch-of-Flash (JBOF) or hyperconverged (Server SAN) configurations. And some enterprise storage vendors, such as Pure Storage, already use NVMe-oF at the “back end” of their storage arrays, connecting storage controllers to shelves of NVMe SSDs. But, until now, none of the major array vendors offered generally available NVMe-oF connectivity to client servers.

Figure 2: NetApp EF-570/E-5700 can connect to Linux servers at 100Gb/s using Mellanox InfiniBand adapters and switches.

 

Conclusion

With the completion of another milestone, NVMe-oF now seems to be accelerating toward mainstream success. It is a widely supported open industry standard, tested for multi-vendor interoperability, supported by mainstream enterprise storage vendors, and it has the performance to match the new non-volatile memory storage technologies. What’s not to like?


Make Your Vote Count for OpenStack Australia: Flash Storage with Open Source RDMA, NVMe Over Fabrics, and Ceph Performance

Voting Just Opened for OpenStack Australia

The 2017 OpenStack Summit has just opened voting for presentations to be given Nov. 6-8 in Sydney, Australia. Mellanox has a long history of supporting OpenStack with technology and product solutions, and we have submitted a number of technical papers, so we urge readers to vote, vote, vote! The OpenStack Foundation receives more than 1,500 submissions and selects only 25-35 percent of them for participation, so your votes count.

In this blog, we will cover the topics and content of three more of our submissions. There is only one short week to vote, so please review the content and vote for your favorites. Voting begins July 25, 2017 and ends August 1, 2017.

 

First paper topic: Accelerating Flash Storage with Open Source RDMA

Overview: File and object storage can already take advantage of current NAND flash to obtain noticeably better performance. However, much more performance is possible as RDMA-based storage technology, originally developed for the HPC industry, moves into the mainstream, and it is all possible with open source solutions. By enhancing a storage system’s network stack with open source RDMA, users can see an even more dramatic improvement than by adding flash alone. RDMA increases the performance of the entire storage system, allowing file, block, and object-based applications to take better advantage of much higher performance solid state storage. With even faster persistent memory on the way, RDMA becomes still more important for eliminating the network stack bottleneck. This presentation will explain how the technology uses ultra-low latency network interfaces to achieve extraordinary storage performance.
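As a small illustration of how approachable the open source RDMA stack is, the sketch below simply enumerates the RDMA-capable devices (InfiniBand or RoCE) that the Linux kernel exposes under /sys/class/infiniband and reports each port’s state and link rate; checking this is typically the first step before layering a storage protocol on top. It assumes the standard Linux sysfs layout and is illustrative only.

```python
# Minimal sketch: enumerate RDMA-capable devices exposed by the open source
# Linux RDMA stack via sysfs and print each port's state and link rate.
import os

IB_SYSFS = "/sys/class/infiniband"

def read(path):
    with open(path) as f:
        return f.read().strip()

if not os.path.isdir(IB_SYSFS):
    print("No RDMA-capable devices found (is the RDMA stack loaded?)")
else:
    for dev in sorted(os.listdir(IB_SYSFS)):
        ports_dir = os.path.join(IB_SYSFS, dev, "ports")
        for port in sorted(os.listdir(ports_dir)):
            state = read(os.path.join(ports_dir, port, "state"))  # e.g. "4: ACTIVE"
            rate = read(os.path.join(ports_dir, port, "rate"))    # e.g. "100 Gb/sec (4X EDR)"
            print(f"{dev} port {port}: {state}, {rate}")
```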

Vote here: Accelerating Flash Storage with Open Source RDMA

 

Second paper topic: NVMe Over Fabrics – High Performance Flash Moves to Ethernet

Overview: NVMe over Fabrics is a new, very high performance SSD interface, now available, that expands the capabilities of networked storage solutions. It is an extension of the local NVMe SSD interface developed a few years ago, driven by the need for a faster interface to SSDs. Much as parallel SCSI was networked with Fibre Channel almost 20 years ago, this technology enables NVMe SSDs to be networked and shared. The open source software uses ultra-low latency RDMA technology to share data across a network without sacrificing the local performance characteristics of NVMe SSDs.
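To make the “networked and shared” idea concrete, the following sketch shows roughly how the open source Linux NVMe-oF target can export a local NVMe namespace over an RDMA port through its nvmet configfs interface. The subsystem NQN, device path, and address are hypothetical placeholders, and the exact configfs attributes can vary between kernel versions.

```python
# Minimal sketch: export a local NVMe SSD over RDMA using the open source
# Linux NVMe-oF target's configfs interface (requires the nvmet and
# nvmet-rdma modules and root privileges). Names/addresses are placeholders.
import os

NQN = "nqn.2017-10.com.example:nvme-subsystem-1"   # hypothetical subsystem name
DEVICE = "/dev/nvme0n1"                            # local NVMe namespace to share
PORT_ADDR = "192.168.1.100"                        # IP of the RDMA-capable NIC
CFG = "/sys/kernel/config/nvmet"

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Create the subsystem and allow any host to connect (fine for a lab setup).
subsys = os.path.join(CFG, "subsystems", NQN)
os.makedirs(os.path.join(subsys, "namespaces", "1"))
write(os.path.join(subsys, "attr_allow_any_host"), "1")

# Back namespace 1 with the local NVMe block device and enable it.
write(os.path.join(subsys, "namespaces", "1", "device_path"), DEVICE)
write(os.path.join(subsys, "namespaces", "1", "enable"), "1")

# Create an RDMA port listening on the standard NVMe-oF service ID 4420.
port = os.path.join(CFG, "ports", "1")
os.makedirs(port)
write(os.path.join(port, "addr_trtype"), "rdma")
write(os.path.join(port, "addr_adrfam"), "ipv4")
write(os.path.join(port, "addr_traddr"), PORT_ADDR)
write(os.path.join(port, "addr_trsvcid"), "4420")

# Linking the subsystem into the port makes it visible to initiators.
os.symlink(subsys, os.path.join(port, "subsystems", NQN))
```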

Vote here: NVMe Over Fabrics – High Performance Flash Moves to Ethernet

 

Third paper topic: Accelerating Ceph Performance with high speed networks and protocols

 

Overview: High performance networks reaching 100Gb/s, along with advanced protocols like RDMA, are making Ceph a mainstream enterprise storage contender. Ceph has gained major traction for low-end applications, but with a little extra focus on the network it can easily compete with the big enterprise storage players. Very fast networks and the Remote Direct Memory Access (RDMA) protocol, based on technologies originally developed for the High Performance Computing (HPC) industry, are now moving into the enterprise. Examples include Microsoft’s SMB Direct and NVMe over Fabrics; Microsoft uses SMB Direct with the RDMA protocol and high performance Ethernet to give a performance boost to its scale-out storage product. This presentation will explain how to enable Ceph to use these same technologies and dramatically improve its performance while expanding its application reach.
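For readers who want to experiment, the sketch below shows one way the RDMA messenger settings could be added to a cluster’s ceph.conf. The option names reflect the async+rdma messenger found in Ceph releases of this era, but treat them as illustrative: the RDMA messenger was still considered experimental, so check the documentation for your release, and note that the device name here is a hypothetical example.

```python
# Minimal sketch: add RDMA messenger settings to ceph.conf so Ceph daemons
# use RoCE/InfiniBand instead of plain TCP. Option names follow the
# async+rdma messenger of Luminous-era releases; verify them against your
# Ceph version. Note: rewriting the file this way drops existing comments.
import configparser

CEPH_CONF = "/etc/ceph/ceph.conf"
RDMA_DEVICE = "mlx5_0"   # hypothetical device name, see /sys/class/infiniband
RDMA_PORT = "1"          # physical port on that device

conf = configparser.ConfigParser()
conf.read(CEPH_CONF)
if "global" not in conf:
    conf["global"] = {}

# Switch the messenger from the default TCP transport to async+rdma.
conf["global"]["ms_type"] = "async+rdma"
conf["global"]["ms_async_rdma_device_name"] = RDMA_DEVICE
conf["global"]["ms_async_rdma_port_num"] = RDMA_PORT

with open(CEPH_CONF, "w") as f:
    conf.write(f)

print("Updated", CEPH_CONF, "- restart the Ceph daemons to pick up the change.")
```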

Vote here: Accelerating Ceph Performance with high speed networks and protocols

 


NVMe Over Fabrics Passes First Multi-Vendor Interoperability Test

Earlier this week, at the University of New Hampshire (UNH) in Durham, another milestone in the life of NVM Express over Fabrics (NVMe-oF) technology occurred: a successful multi-vendor interoperability test. Last year, I wrote about the release of version 1.0 of the open industry standard for NVMe-oF by NVMe.org; it was a defining milestone for NVMe-oF. The year before, I wrote about the first public demonstration of NVMe-oF at the National Association of Broadcasters (NAB) show. Both were important achievements for this exciting new and extremely high performance networked storage technology. But for a technology to be really successful in the world of high tech, it must not only be an open industry standard available from multiple vendors; those vendors’ implementations must also be interchangeable, so buyers can mix and match offerings based on price, features, availability, and so on. Yesterday that criterion was met. Like its older brother NVMe, NVMe-oF was tested by UNH and proven to interoperate successfully.

The University of New Hampshire InterOperability Laboratory (UNH-IOL)

The mission of the UNH-IOL is to provide a neutral environment that enables multi-vendor interoperability, conformance to standards, and with those, the improvement of data networking. UNH students perform the testing, which they have been doing since 1988. This offers the nice side benefit of turning out students with hands-on experience in IT technologies. Over the years, the lab has grown into one of the industry’s most well-known independent proving grounds for new technologies. I was first involved with the UNH-IOL back in the ’90s, when it hosted the first interoperability lab for Fibre Channel. It seems fitting that more than 20 years later, it is hosting the first test for NVMe-oF, a technology every bit as revolutionary as Fibre Channel was back then.

NVMe-oF Technology

Recall that my colleague John Kim authored earlier blogs detailing how NVMe-oF technology enables the new high performance SSD interface, Non-Volatile Memory Express (NVMe), to be connected across RDMA-capable networks. Add new Ethernet and InfiniBand speeds, which now top out at 100Gb/s (with 200/400Gb/s coming soon), and NVMe-oF will dramatically improve the performance of existing storage network applications. It will also accelerate the adoption of many new and future storage architectures.

The Test Setup

The test was organized by the UNH-IOL to coincide with its regularly scheduled bi-yearly NVMe testing, in order to leverage the SSD expertise already on site. The high-level test plan, agreed upon ahead of time, called for participating vendors to mix and match their NICs in both the Target and Initiator positions of the topology.

The open source NVMe-oF Linux driver software was used on both the Target and Initiator platforms. The NICs used the RoCE (RDMA over Converged Ethernet) protocol to provide the RDMA capability required by the NVMe-oF driver software. UNH-IOL aggressively encouraged all NIC vendors to attend. BTW, at the first Fibre Channel UNH-IOL event there were just a handful of startups: Ancor (mine), Gadzooks (great name), and perhaps a couple of others. The NVMe-oF testing this week was completely successful, with our ConnectX®-4 adapter and other vendors’ NICs proving fully interoperable as either the NVMe-oF Target or Initiator. Near-line-rate performance of 25Gb/s was also achieved with these adapters.

Conclusions

With another milestone for NVMe-oF ticked off the list, the technology seems fast-tracked to success. It has all the elements: cutting-edge performance arriving at the exact moment it is needed to network new storage technologies whose performance is 100 times that of hard drives and only getting faster; an open industry standard, widely supported and contributed to by storage and networking companies big and small; and now, multi-vendor interoperability from leading suppliers of the Ethernet NICs it runs on. BTW, at the first Fibre Channel test at UNH we could barely get the adapters and switches to link up at Layer 2, and look how Fibre Channel turned out.


NVMe Over Fabrics Standard is Released

Today, the NVM Express Organization released version 1.0 of the NVM Express over Fabrics (NVMf) Standard. This effort started on September 3, 2014 and, through the work of many companies including Mellanox, has now been completed. Although the standard was only completed today, at Mellanox we have been doing proofs of concept and demonstrations of NVMf with numerous partners and early adopter customers for more than a year.

NVMf allows the new high performance SSD interface, Non-Volatile Memory Express (NVMe), to be connected across RDMA-capable networks. This is the first new, built-from-the-ground-up networked storage technology to be developed in over 20 years. Coupled with new Ethernet and InfiniBand speeds, which now top out at 100Gb/s, NVMf will not only dramatically improve the performance of existing storage network applications but will also accelerate the adoption of many new and future computing technologies such as scale-out and software-defined storage, hyperconverged infrastructure, and compute/storage disaggregation.

Why Would I Want It? Because “Faster Storage Needs Faster Networks!”

The performance of storage technologies has increased 100 times in the last five years as Flash-based Solid State Disks (SSDs), and especially SSDs connected over NVMe, have come to market.

Figure 1: Newer storage is exponentially faster than older storage.

New Non-Volatile Memory (NVM) and Persistent Memory (PM) technologies that are again 100 times faster than today’s SSDs are just around the corner. Without faster network speeds and protocol technologies such as NVMf, these new SSDs and NVM technologies will have their performance locked up inside the server.


Figure 2: As storage latencies decrease, protocol and network latencies become relatively more important and must also be reduced.

As storage gets faster, the network wire speed and protocol become the bottleneck. We can speed up the wires with the latest Ethernet and InfiniBand speeds, but new, more efficient protocols are also needed. Fortunately, NVMf leverages RDMA (Remote Direct Memory Access), which allows it to move data across the network with minimal protocol overhead.
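A quick back-of-the-envelope calculation shows why the protocol matters more as media gets faster. The latency figures below are rough, order-of-magnitude assumptions in the spirit of Figure 2, not measured results.

```python
# Rough, illustrative arithmetic: as storage media latency falls, a fixed
# network/protocol overhead becomes a much larger share of total I/O latency.
# Media numbers are order-of-magnitude estimates, not benchmark results.
MEDIA_LATENCY_US = {
    "HDD":               5000.0,  # ~5 ms of seek + rotation
    "SATA/SAS SSD":       100.0,  # ~100 us
    "NVMe SSD":            20.0,  # tens of microseconds
    "Persistent Memory":    1.0,  # ~1 us
}

LEGACY_NETWORK_US = 100.0  # assumed traditional networked-storage protocol stack
RDMA_NETWORK_US = 10.0     # assumed order of what an RDMA-based fabric adds

for media, media_us in MEDIA_LATENCY_US.items():
    legacy_share = LEGACY_NETWORK_US / (media_us + LEGACY_NETWORK_US) * 100
    rdma_share = RDMA_NETWORK_US / (media_us + RDMA_NETWORK_US) * 100
    print(f"{media:18s} legacy network = {legacy_share:5.1f}% of total, "
          f"RDMA fabric = {rdma_share:5.1f}% of total")
```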

Figure 3: RDMA allows direct, zero-copy and hardware-accelerated data transfers to server or storage memory, reducing network latencies and offloading the system CPU.

RDMA over InfiniBand or RoCE (RDMA over Converged Ethernet) allows data in memory to be transferred between computers and storage devices across a network with little or no CPU intervention. This is done with hardware transport offloads on network adapters that support RDMA.

How Fast Is It?

Of course, performance depends on many factors: the SSDs, the Initiator (server) and Target (storage device) architectures, and the network itself. Here are the results of one test conducted with a partner for a conference last year:


Figure 4: Pre-standard NVMf demo with Mellanox 100GbE networking demonstrates extremely low fabric latencies compared to using the same NVMe SSDs locally.

The most interesting data points are the added latency numbers. This is the difference in latency between accessing the SSDs locally in the Target server and accessing the same SSDs remotely across the network with NVMf. It should be noted that this was an early, pre-standard version of NVMf and used highly optimized Initiator and Target systems tightly integrated with the SSDs, with dual 100GbE connections using Mellanox ConnectX-4 Ethernet adapters. But even doubling or tripling these numbers yields impressive performance unattainable with existing storage networking technologies.

Mellanox Offers the Best Networking for NVMf

This new standard is doubly exciting to me because Mellanox is the clear leader in both RDMA and high-speed networking. We have more than 90 percent of the market share for both 40GbE adapters and InfiniBand adapters and were first to market with 25/50/100Gb/s Ethernet adapters and switches. The high performance of Mellanox networking solutions helps the faster NVMe SSDs and the more efficient NVMf protocol shine. In addition, Mellanox just announced BlueField, a multi-core System on Chip that is ideal for controlling and connecting an NVMf flash shelf.

Figure 5: Mellanox BlueField includes high-speed networking, RDMA offloads, multiple CPU cores, many PCIe lanes, and DRAM, making it the ideal NVMe over Fabrics shelf controller.

Conclusion

I was lucky enough, or old enough, to have been around for and worked on the version 1.0 release of the Fibre Channel specification in 1994. So I am not making it up when I say, “This is the first new, built-from-the-ground-up networked storage technology to come along in over 20 years.” The excitement and interest level in the computer industry is even higher now than it was back then. NVMf is the perfect technology to fill the gaping hole that has recently opened up between storage performance and storage network performance. At Mellanox, we have a suite of products to fill this hole, which we have been developing and testing with partners over the last couple of years.

Faster storage needs faster networks!

Mangstor & Mellanox Show NVMe Over Fabrics Solution To Reduce Latency Tax

This week the National Association of Broadcasters (NAB) show is in full swing in Las Vegas and the Ethernet Technology Summit (ETS) is running in Santa Clara, California. Today in the United States also happens to be Tax Day, when you must file your return and pay any extra taxes owed to the US Government. That makes it a great time to show a new solution that aims to eliminate latency “taxes” from flash storage: it’s called NVMe Over Fabrics.

What Is NVMe and Why Would I Want It?

First, a brief history of NVMe (Non-Volatile Memory Express): traditionally, flash storage is connected through SAS or SATA disk interfaces or through a PCIe slot with proprietary drivers. SAS and SATA are proven solutions, but they, and the SCSI protocol layer they include, were designed for spinning disks, not flash. NVMe standardizes a flash-optimized command set for accessing flash devices over a PCIe bus, eliminating the SCSI latency tax. NVMe devices are shipping now with native drivers for Linux, Windows, and VMware.
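As a practical aside, those native Linux drivers pair with the open source nvme-cli utility; the short sketch below lists the NVMe devices a system sees and reads one controller’s identify data. The device path is a placeholder.

```python
# Minimal sketch: use the open source nvme-cli utility to list local NVMe
# devices and read the identify-controller data of one of them.
import subprocess

def run(cmd):
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Show every NVMe namespace the kernel's native driver has enumerated.
print(run(["nvme", "list"]))

# Dump identify-controller fields (model, firmware, queue counts, etc.)
# for the first controller; /dev/nvme0 is a placeholder device path.
print(run(["nvme", "id-ctrl", "/dev/nvme0"]))
```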


Figure 1: Availability of NVMe drivers
