Tag Archives: RDMA

Michael Kagan, Office of the CTO

RDMA Enabling Storage Technology Revolution

With the explosion of data over the past few years, data storage has become a hot topic among corporate decision makers. It is no longer sufficient to have adequate space for the massive quantities of data that must be stored; it is just as critical that stored data be accessible without any bottlenecks that impede the ability to process and analyze data in real time.


Traditionally, accessing hard disk storage took tens of milliseconds, and the corresponding network and protocol overheads were in the hundreds of microseconds, a negligible percentage of the overall access time.


At that time, networks ran at 1Gb/s, and SCSI was the protocol used to access storage locally, while iSCSI, built on TCP, was developed for remote access.


However, once storage technology improved and Solid-State Drives (SSDs) became the norm, access time dropped by two orders of magnitude, to hundreds of microseconds. Unless network and protocol access times decreased by a similar factor, they would create a bottleneck that negated the gains made by the new media technology.
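To put rough numbers behind that claim, here is a back-of-the-envelope calculation in C. The access and overhead figures are the illustrative ones quoted above, not measurements:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative figures from the text, in microseconds. */
    double hdd_access   = 10000.0; /* ~10 ms spinning disk           */
    double ssd_access   = 200.0;   /* ~hundreds of us for SSD        */
    double net_overhead = 300.0;   /* network + protocol, ~hundreds of us */

    printf("HDD: overhead is %.1f%% of total access time\n",
           100.0 * net_overhead / (hdd_access + net_overhead));
    printf("SSD: overhead is %.1f%% of total access time\n",
           100.0 * net_overhead / (ssd_access + net_overhead));
    return 0;
}
```

With spinning disks, the network overhead is lost in the noise (about 3%); with SSDs it becomes the dominant share of every access (about 60%).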


This meant that the network had to handle larger bandwidths, such as 40Gb/s and now even 100Gb/s, to drive faster data transfers. For remote access, iSCSI is still the protocol of choice; however, TCP is no longer efficient enough, so RDMA (RoCE) became the transport of choice for data-plane operation, and iSER was developed as an RDMA-based enhancement of iSCSI.
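For a sense of what the RDMA data path looks like in practice, here is a minimal sketch of posting a one-sided RDMA WRITE with the libibverbs API. It assumes an already-connected queue pair, a registered memory region covering the local buffer, and a remote address and rkey exchanged out of band; all setup and error handling are omitted:

```c
#include <infiniband/verbs.h>
#include <stdint.h>

/* Post a one-sided RDMA WRITE: the local buffer lands in the peer's
 * memory with no remote CPU involvement and no TCP in the data path.
 * Assumes qp is a connected RC queue pair, mr registers buf, and
 * remote_addr/rkey were exchanged out of band during connection setup. */
static int rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                      void *buf, size_t len,
                      uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {0}, *bad_wr = NULL;

    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* generate a completion */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
}
```

The write lands directly in the peer's memory without involving its CPU, which is the property iSER exploits to move SCSI data without TCP in the path.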

Continue reading

RoCE has Crossed the Chasm

In my previous post, I outlined how Gartner and The Register were predicting a gloomy outcome for Fibre Channel over Ethernet (FCoE) and asserted that, in contrast, RDMA over Converged Ethernet (RoCE) has quite a rosy future. The key here is that RoCE has crossed the chasm from technology enthusiasts and early adopters to mainstream buyers.


In his book of the same name, Moore explains that the main challenge of crossing the chasm is that the Early Majority are pragmatists interested in the quality, reliability, and business value of a technology. Whereas visionaries and enthusiasts relish new, disruptive technologies, the pragmatist values solutions that integrate smoothly into the existing infrastructure. Pragmatists prefer well-established suppliers and seek references from other mature customers in their industry. And pragmatists look for technologies with a competitive, multi-vendor ecosystem that gives them flexibility, bargaining power, and leverage.

To summarize, the three key requirements for a technology to cross the chasm are:

  1. Demonstration that the technology delivers clear business value
  2. Penetration of a key beachhead in a mainstream market
  3. Multi-vendor, competitive ecosystem of suppliers


On all three fronts RoCE has crossed the chasm.

Continue reading

RoCE has Leaped the Canyon but FCoE Faces a Hellish Descent

I was talking with my colleague, Rob Davis, recently, and he commented that “RoCE has leaped the canyon.” Now, Rob is from Minnesota and they talk kind of funny there, but despite the rewording, I realized instantly what he meant. RoCE, of course, refers to RDMA over Converged Ethernet technology, and “leaped the canyon” was simply a more emphatic way of saying “crossed the chasm.”


This is, of course, the now-proverbial CHASM: the gap between early adopters and mainstream users made famous by the book, “Crossing the Chasm” by @GeoffreyAMoore. If you are serious about high-tech marketing and haven’t read this book, then you should cancel your afternoon meetings, download it onto your Kindle, and dive in! Moore’s Chasm, Clayton Christensen’s Innovator’s Dilemma, and Michael Porter’s Competitive Strategy comprise the sacred trilogy for technology marketers.


Crossing the Chasm – Source: http://yourstory.com/2014/09/druva-inc-techsparks-pune-crossing-the-chasm/


Continue reading

Storage Spaces Direct: If Not RDMA, Then What? If Not Mellanox, Then Who?

Over the past couple of years, we have witnessed significant architectural changes affecting modern data center storage systems. These changes have had a dramatic effect, as they have practically replaced the traditional Storage Area Network (SAN), which had been the dominant solution for over a decade.


When analyzing the market trends that led to this change, it becomes very clear that virtualization is the main culprit. The SAN architecture was very efficient when only one workload was accessing the storage array, but it has become much less efficient in a virtualized environment in which different workloads arrive from different independent Virtual Machines (VMs).


To better understand this concept, let’s use a city’s traffic light system as an analogy to a data center’s data traffic. In this analogy, the cars are the data packets (coming in different sizes), and the traffic lights are the data switches. Before the city programs a traffic light’s control, it conducts a thorough study of the traffic patterns of that intersection and the surrounding area.


Continue reading

Double Your Storage System Efficiency

Enable Higher IOPS while Maximizing CPU Utilization

As virtualization is now a standard technology in the modern data center, IT managers are seeking ways to increase efficiency by adopting new architectures and technologies that enable faster data processing and the execution of more jobs over the same infrastructure, thereby lowering the cost per job. Since CPUs and storage systems are the two main contributors to infrastructure cost, using fewer CPU cycles and accelerating access to storage are the keys to achieving higher efficiency.


The ongoing demand to support mobility and real-time analytics over constantly growing amounts of data requires new architectures and technologies: ones that make smarter use of expensive CPU cycles and that replace older storage systems, which were very efficient in their day but have become hard to manage and extremely expensive to scale in modern virtualized environments.


With an average cost of $2,500 per CPU, CPUs account for about 50% of a compute server’s cost. The I/O controller, on the other hand, costs less than $100. Thus, offloading tasks from the CPU to the I/O controller frees expensive CPU cycles and increases overall server efficiency; other expensive components, such as SSDs, no longer need to wait extra cycles for the CPU. Using advanced I/O controllers with offload engines therefore yields a much more balanced system that increases overall infrastructure efficiency.
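A quick calculation using the post’s own figures shows why the trade is attractive. The dual-socket server and the 20% of CPU cycles freed by offload are my own hypothetical assumptions, not claims from the post:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative figures from the text. */
    double cpu_cost = 2500.0;  /* per CPU                          */
    double cpus     = 2.0;     /* hypothetical dual-socket server  */
    double nic_cost = 100.0;   /* offload-capable I/O controller   */

    /* Hypothetical assumption: offload frees 20% of CPU cycles
     * for application work. */
    double freed = 0.20;

    printf("CPU spend per server:      $%.0f\n", cpus * cpu_cost);
    printf("Value of freed CPU cycles: $%.0f vs. NIC cost $%.0f\n",
           freed * cpus * cpu_cost, nic_cost);
    return 0;
}
```

Even under these modest assumptions, the freed CPU cycles are worth roughly ten times the controller’s cost.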


Continue reading

How-old.net and the Driving Forces behind the Age of Machine Learning

I am on a business trip and had dinner with a few coworkers last night. During dinner, one of them proudly pulled out his smartphone and bragged about how young how-old.net thinks he is. Indeed, the age that how-old.net spat out was about 2/3 of his real age.


Of course, he had to take everyone’s picture, and we had a good laugh about the results. Moreover, right before I started this business trip a couple of days ago, multiple friends of mine were posting similar pictures from how-old.net online; it had gone viral! In case you haven’t tried it, the site simply guesses your age from an uploaded photo.




Continue reading

How to Achieve Higher Efficiency in Software Defined Networks (SDN) Deployments

During the last couple of years, the networking industry has invested a lot of effort into developing Software Defined Network (SDN) technology, which is drastically changing data center architecture and enabling large-scale clouds without significantly escalating the TCO (Total Cost of Ownership).


The secret of SDN is not that it enables control of data center traffic via software (it’s not as if IT managers were using screwdrivers to manage the network before), but rather that it affords the ability to decouple the control path from the data path. This represents a major shift from traditional data center networking architecture and therefore offers agility and better economics in modern deployments.


For readers who are not familiar with SDN, a simple example can demonstrate the efficiency that SDN provides. Imagine a traffic light that makes its own decisions about when to change and sends data to the other lights. Now imagine that replaced by a centralized control system that takes a global view of the entire traffic pattern throughout the city and therefore makes smarter decisions about how to route the traffic.


The centralized control unit tells each of the lights what to do (using a standard protocol), reducing the complexity of the local units while increasing overall agility. For example, in an emergency, the system can reroute traffic and allow rescue vehicles faster access to the source of the issue.
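To make the control/data path decoupling concrete, here is a toy sketch of the traffic-light analogy in C. This is not a real SDN API such as OpenFlow; every type and function below is hypothetical. The point is only that the “lights” (data path) stay dumb and fast while the controller (control path) owns the global decision logic:

```c
#include <stdio.h>

/* Hypothetical toy model of control/data path decoupling. */
typedef struct { int id; int green_secs; } light_t;

/* Data path: just applies the rule it was given; no local decisions. */
static void apply_rule(light_t *l, int green_secs) {
    l->green_secs = green_secs;
    printf("light %d: green for %ds\n", l->id, l->green_secs);
}

/* Control path: global view of the city; here, an emergency reroute
 * that clears a path along one route while holding the others. */
static void controller_emergency(light_t *lights, int n, int route_id) {
    for (int i = 0; i < n; i++)
        apply_rule(&lights[i], lights[i].id == route_id ? 60 : 5);
}

int main(void) {
    light_t city[3] = { {0, 30}, {1, 30}, {2, 30} };
    controller_emergency(city, 3, 1);  /* rescue vehicles via light 1 */
    return 0;
}
```

All the intelligence lives in the controller; replacing its policy never requires touching the lights themselves.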


Tokyo Traffic Control Center; Photo Courtesy of @CScoutJapan

Continue reading

Establishing a High Performance Cloud with Mellanox CloudX

When it comes to advanced scientific and computational research in Australia, the leading organization is the National Computational Infrastructure (NCI).  NCI was tasked to form a national research cloud, as part of a government effort to connect eight geographically distinct Australian universities and research institutions into a single national cloud system.


NCI decided to establish a high-performance cloud, based on Mellanox 56Gb/s Ethernet solutions.  NCI, home to the Southern Hemisphere’s most powerful supercomputer, is hosted by the Australian National University and supported by three government agencies: Geoscience Australia, the Bureau of Meteorology, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).

Continue reading

Road to 100Gb/sec…Innovation Required! (Part 1 of 3)

Transport Layer Innovation: RDMA

During my undergraduate days at UC Berkeley in the 1980s, I remember climbing through the attic of Cory Hall running 10Mbit/sec coaxial cables to professors’ offices. Man, that 10Base2 coax was fast!! Here we are in 2014, right on the verge of 100Gbit/sec networks. Four orders of magnitude increase in bandwidth is no small engineering feat, and achieving 100Gb/s network communications requires innovation at every level of the seven-layer OSI model.
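A quick sanity check on that four-orders-of-magnitude claim, with the time to move a single gigabyte added for scale (the 10Mb/s and 100Gb/s rates are the ones quoted above):

```c
#include <stdio.h>

int main(void) {
    double coax_bps = 10e6;   /* 10Base2 coax: 10 Mbit/s   */
    double new_bps  = 100e9;  /* modern fabric: 100 Gbit/s */
    double one_gb   = 8e9;    /* 1 GByte expressed in bits */

    printf("speedup: %.0fx\n", new_bps / coax_bps);          /* 10000x */
    printf("1 GB on coax:    %6.1f s\n", one_gb / coax_bps); /* 800 s  */
    printf("1 GB at 100Gb/s: %6.3f s\n", one_gb / new_bps);  /* 0.08 s */
    return 0;
}
```

A transfer that once took over thirteen minutes now completes in under a tenth of a second.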

To tell you the truth, I never really understood the top three layers of the OSI model. I prefer the TCP/IP model, which collapses all of them into a single “Application” layer that makes more sense. Unfortunately, it also collapses the Link layer and the Physical layer, and I don’t think it makes sense to combine those two. So I like to build my own ‘hybrid’ model, which collapses the top three layers into an Application layer but keeps the Link and Physical layers separate.

[Figure: a hybrid layer model, with the top three OSI layers collapsed into a single Application layer above separate Transport, Network, Link, and Physical layers]

It turns out that a tremendous amount of innovation is required in these bottom four layers to achieve effective 100Gb/s communications networks. The application layer needs to change as well to take full advantage of 100Gb/s networks. For now, we’ll focus on the bottom four layers.

Continue reading

How RDMA Increases Virtualization Performance Without Compromising Efficiency

Virtualization has already proven itself to be the best way to improve data center efficiency and simplify management tasks. However, getting those benefits requires using the various services that the Hypervisor provides, which introduces delay and results in longer execution times compared to running on a non-virtualized (native) infrastructure. This drawback hasn’t escaped the high-tech R&D community, which has been seeking ways to enjoy the advantages of virtualization with minimal effect on performance.

One of the most popular solutions today for achieving native performance is SR-IOV (Single Root IO Virtualization), a mechanism that bypasses the Hypervisor and creates a direct link between the VM and the IO adapter. However, although the VM gets native performance, it loses all of the Hypervisor’s services; important features like high availability (HA) and VM migration can’t be done easily. SR-IOV also requires the VM to carry the specific NIC driver it communicates with, which complicates management, since IT managers can’t use the common driver that normally runs between the VM and the Hypervisor.
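As an aside on mechanics: on Linux, SR-IOV virtual functions are typically enabled through sysfs before being passed to VMs. A minimal sketch follows; the PCI address is only an example, the VF count of 4 is arbitrary, and writing sysfs requires root:

```c
#include <stdio.h>

/* Enable SR-IOV VFs for a NIC via Linux sysfs. The PCI address below
 * is an example; find yours with `lspci` or `ethtool -i <iface>`. */
int main(void) {
    const char *dev = "/sys/bus/pci/devices/0000:03:00.0";
    char path[256];
    int total = 0;

    /* How many VFs does the device support? */
    snprintf(path, sizeof(path), "%s/sriov_totalvfs", dev);
    FILE *f = fopen(path, "r");
    if (!f || fscanf(f, "%d", &total) != 1) { perror(path); return 1; }
    fclose(f);
    printf("device supports up to %d VFs\n", total);

    /* Enable 4 VFs; each VF can then be passed straight through
     * to a VM, bypassing the Hypervisor on the data path. */
    snprintf(path, sizeof(path), "%s/sriov_numvfs", dev);
    f = fopen(path, "w");
    if (!f) { perror(path); return 1; }
    fprintf(f, "%d\n", 4);
    fclose(f);
    return 0;
}
```

Each VF shows up as its own PCI function, which is exactly why the VM needs the physical NIC’s driver rather than the common paravirtual one.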

As virtualization becomes a standard technology, the industry continues to find ways to improve performance without losing these benefits, and organizations have started to invest more in the deployment of RDMA-enabled interconnects in virtualized data centers. In one of my previous blogs, I discussed the proven deployment of RoCE (RDMA over Converged Ethernet) in Azure using SMB Direct (SMB 3.0 over RDMA), enabling faster access to storage.

Continue reading