All posts by Kevin Deierling

About Kevin Deierling

Kevin Deierling has served as Mellanox's vice president of marketing since March 2013. Previously, he was chief architect at Silver Spring Networks from 2007 to 2012. From 2005 to 2007, he was vice president of marketing and business development at Spans Logic. From 1999 to 2005, Mr. Deierling was vice president of product marketing at Mellanox Technologies. Kevin has contributed to multiple technology standards through organizations including the InfiniBand Trade Association and the PCI Industrial Computer Manufacturers Group (PICMG). He holds over 20 patents and was a contributing author of a text on BiCMOS design. Kevin holds a BA in Solid State Physics from UC Berkeley. Follow Kevin on Twitter: @TechseerKD

RoCE has Crossed the Chasm

In my previous post, I outlined how Gartner and The Register were predicting a gloomy outcome for Fibre Channel over Ethernet (FCoE), and asserted that, in contrast, RDMA over Converged Ethernet (RoCE) has quite a rosy future. The key here is that RoCE has crossed the chasm from technology enthusiasts and early adopters to mainstream buyers.


In Crossing the Chasm, Moore explains that the main challenge is that the Early Majority are pragmatists interested in the quality, reliability, and business value of a technology. Whereas visionaries and enthusiasts relish new, disruptive technologies, the pragmatist values solutions that integrate smoothly into the existing infrastructure. Pragmatists prefer well-established suppliers and seek references from other mature customers in their industry. And pragmatists look for technologies backed by a competitive, multi-vendor ecosystem that gives them flexibility, bargaining power, and leverage.

To summarize, the three key requirements for a technology to cross the chasm are:

  1. Demonstration that the technology delivers clear business value
  2. Penetration of a key beachhead in a mainstream market
  3. A competitive, multi-vendor ecosystem of suppliers


On all three fronts RoCE has crossed the chasm.

Continue reading

RoCE has Leaped the Canyon but FCoE Faces a Hellish Descent

I was talking with my colleague, Rob Davis, recently and he commented that “RoCE has leaped the canyon.” Now, Rob is from Minnesota and they talk kind of funny there, but despite the rewording, I realized instantly what he meant. RoCE, of course, refers to RDMA over Converged Ethernet technology, and “leaped the canyon” was simply a more emphatic way of saying “crossed the chasm.”


This is, of course, the now proverbial CHASM: the gap between early adopters and mainstream users made famous by the book “Crossing the Chasm” by @GeoffreyAMoore. If you are serious about high-tech marketing and haven’t read this book, then you should cancel your afternoon meetings, download it onto your Kindle, and dive in! Moore’s Crossing the Chasm, along with Clayton Christensen’s The Innovator’s Dilemma and Michael Porter’s Competitive Strategy, comprises the sacred trilogy for technology marketers.


Figure: Crossing the Chasm


Continue reading

Ethernet Just Got a Big Performance Boost with Release of Soft-RoCE

Data Center innovation just keeps getting faster and RoCE just gave a big boost to Ethernet! Today we announced the release of open source Soft-RoCE. Soft-RoCE is a software implementation of RDMA over Converged Ethernet that allows RoCE to run on any Ethernet network adapter, whether or not it offers hardware acceleration.

This Soft-RoCE announcement comes fast on the heels of our big launch last week at One World Observatory of our next generation of 25 and 100Gb Ethernet adapters and switches. As we announced last week, both the Mellanox Spectrum 25/50/100 Gigabit Ethernet switch and the ConnectX-4 Lx 25/50 Gigabit Ethernet adapter fully support RoCE in hardware. As such, they offer the highest-performance and most cost-effective RDMA over Ethernet solutions on the market.



So why would we want to enable the market with a software implementation of RoCE that runs on Ethernet adapters without hardware acceleration for RDMA? Because we believe that once customers try RDMA and see the benefits, they will decide to deploy RoCE in their data centers, and we want to make that as easy as possible with Soft-RoCE. That way, customers can run a proof-of-concept evaluation using existing client machines with any Ethernet NIC, connecting to a RoCE-enabled server or storage system. RoCE-enabled systems are available from Dell, Data-On, HP, IBM, Iron Systems, Lenovo, SuperMicro, Zadara, and others.
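If you are curious what this looks like from the application side, here is a minimal sketch (my own illustration, not part of the Soft-RoCE release itself) using the standard RDMA verbs API in C. It assumes a Linux host with the rdma-core/libibverbs development package installed and simply lists whatever RDMA devices are present; on a machine where Soft-RoCE has been configured on a plain Ethernet NIC, the software device shows up here just like a hardware RoCE adapter would, so the same verbs code runs unmodified on both.

```c
/* List the RDMA devices visible through the verbs API.
 * A Soft-RoCE device configured on a plain Ethernet NIC appears here
 * exactly like a hardware RoCE adapter would.
 * Build (assuming rdma-core/libibverbs is installed):
 *   cc list_rdma.c -o list_rdma -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);

    if (!devices || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found (hardware or Soft-RoCE)\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(devices[i]));

    ibv_free_device_list(devices);
    return 0;
}
```

Running this on both a Soft-RoCE client and a hardware-accelerated server is a quick way to confirm that applications see the same API on each side of the connection.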

Continue reading

IBM Enterprise2014 Executive Summit: Turbo LAMP

This week at the IBM Enterprise2014 Executive Summit in Las Vegas, IBM unveiled new POWER8-based infrastructure for cloud, data, web 2.0, and mobile engagement. Mellanox is being showcased as a key partner enabling critical platforms for IBM’s Big Data analytics, cloud, and software-defined “Elastic Storage” solutions. The new POWER8 platform incorporates Mellanox 40Gb Ethernet networking gear and a fully integrated Turbo LAMP (Linux, Apache, MySQL, PHP) software stack.

This Turbo LAMP stack came about through a development partnership between IBM, Mellanox and several software vendors:

  • Canonical (Ubuntu Linux & Apache Web Server)
  • SkySQL (MariaDB/MySQL Database)
  • Zend (PHP)

The Turbo LAMP integration is important because it is the foundation for the most common e-commerce, content management, and Big Data analytics platforms. This integration allows customers to deliver optimized mobile and web applications with the performance, scale, and secure access that businesses need.

On Thursday, October 9, our very own Matthew Sheard will be on stage at the IBM Enterprise2014 conference, providing details on the solution as outlined in this presentation.

Continue reading

Road to 100Gb/sec…Innovation Required! (Part 3 of 3)

Physical Layer Innovation: Silicon Photonics

So in two previous posts, I discussed the innovations required at the transport, network, and link layers of the communications protocol stack to take advantage of 100Gb/s networks. Let’s now talk about the physical layer. A 100Gb/sec signaling rate implies a 10ps symbol period.
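To make that number concrete, here is a quick back-of-the-envelope calculation (a simple sketch assuming one bit per symbol, i.e. NRZ signaling): the symbol period is just the inverse of the signaling rate, so a single 100Gb/sec lane leaves only 10ps per symbol, while four parallel 25Gb/sec lanes relax that to a much more manageable 40ps.

```c
/* Back-of-the-envelope symbol periods for different 100Gb/s lane options.
 * Assumes one bit per symbol (NRZ) on each lane; symbol period = 1 / rate.
 */
#include <stdio.h>

static double symbol_period_ps(double gigabits_per_sec)
{
    /* 1 / (Gb/s * 1e9) seconds, converted to picoseconds */
    return 1.0e12 / (gigabits_per_sec * 1.0e9);
}

int main(void)
{
    printf("1 lane  x 100Gb/s: %.0f ps per symbol\n", symbol_period_ps(100.0)); /* 10 ps */
    printf("4 lanes x  25Gb/s: %.0f ps per symbol\n", symbol_period_ps(25.0));  /* 40 ps */
    return 0;
}
```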

Frankly, this is just not possible on a commercial basis with current technology, on either copper or optical interfaces. At this rate the electrical and optical pulses just can’t travel any useful distance without smearing into each other and getting corrupted.

So there are two possible solutions. The first is to use four parallel connections, each running at 25Gb/sec. The second is to keep the 25Gb/sec symbol rate but pack more bits into each channel, either electrically through multi-level signaling techniques like Pulse Amplitude Modulation (PAM4), or optically by sending four different colors of light on a single fiber using Wavelength Division Multiplexing (WDM) techniques.

Continue reading

Road to 100Gb/sec…Innovation Required! (Part 2 of 3)

Network and Link Layer Innovation: Lossless Networks

In a previous post, I discussed how innovations are required at every layer of the communications protocol stack to take advantage of 100Gb/s networks, starting with the need for RDMA at the transport layer. So now let’s look at the requirements at the next two layers of the protocol stack. It turns out that RDMA transport requires innovation at the Network and Link layers in order to provide a lossless infrastructure.

‘Lossless’ in this context does not mean that the network can never lose a packet, as some level of noise and data corruption is unavoidable. Rather, by ‘lossless’ we mean a network designed to avoid intentional, systematic packet loss as a means of signaling congestion. That is, packet loss is the exception rather than the rule.

Priority Flow Control is similar to a traffic light and enables lossless networks

Lossless networks can be achieved by using priority flow control at the link layer, which allows packets to be forwarded only if there is buffer space available in the receiving device. In this way, buffer overflow and packet loss are avoided and the network becomes lossless.

In the Ethernet world, this is standardized as 802.1Qbb Priority Flow Control (PFC) and is equivalent to putting stop lights at each intersection. A packet in a given priority class can only be forwarded when the light is green.
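To make the traffic-light analogy a bit more concrete, here is a toy model in C (purely illustrative; a real switch implements this in hardware per the 802.1Qbb spec, not like this). Each of the eight priority classes gets its own “light,” and a frame is forwarded only while the receiver has advertised buffer space for that class; when the light is red, the frame simply waits in its queue rather than being dropped.

```c
/* Toy model of Priority Flow Control: one "traffic light" per priority class.
 * A frame in class p is forwarded only while the receiver has buffer credit
 * for p; when credit runs out the light turns red and the frame waits in its
 * queue, so nothing is dropped. (Illustrative only, not a real switch path.)
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_PRIORITIES 8   /* 802.1p defines eight priority classes */

static int rx_buffer_credit[NUM_PRIORITIES];  /* free space advertised by the receiver */

static bool light_is_green(int priority)
{
    return rx_buffer_credit[priority] > 0;
}

/* Returns true if the frame was forwarded, false if it must wait (not dropped). */
static bool try_forward(int priority)
{
    if (!light_is_green(priority))
        return false;               /* red light: hold the frame in its queue */
    rx_buffer_credit[priority]--;   /* green light: consume one unit of buffer */
    return true;
}

int main(void)
{
    rx_buffer_credit[3] = 1;        /* receiver has room for one class-3 frame */

    printf("frame 1, class 3: %s\n", try_forward(3) ? "forwarded" : "paused");
    printf("frame 2, class 3: %s\n", try_forward(3) ? "forwarded" : "paused");
    return 0;
}
```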

Continue reading

Road to 100Gb/sec…Innovation Required! (Part 1 of 3)

Transport Layer Innovation: RDMA

During my undergraduate days at UC Berkeley in the 1980s, I remember climbing through the attic of Cory Hall running 10Mbit/sec coaxial cables to professors’ offices. Man, that 10Base2 coax was fast!! Here we are in 2014, right on the verge of 100Gbit/sec networks. A four-orders-of-magnitude increase in bandwidth is no small engineering feat, and achieving 100Gb/s network communications requires innovation at every level of the seven-layer OSI model.

To tell you the truth, I never really understood the top three layers of the OSI model: I prefer the TCP/IP model, which collapses all of them into a single “Application” layer, and that makes more sense to me. Unfortunately, it also collapses the Link layer and the Physical layer, and I don’t think it makes sense to combine those two. So I like to use my own ‘hybrid’ model that collapses the top three layers into an Application layer but keeps the Link and Physical layers separate.


It turns out that a tremendous amount of innovation is required in these bottom four layers to achieve effective 100Gb/s communications networks. The application layer needs to change as well to take full advantage of 100Gb/s networks, but for now we’ll focus on the bottom four layers.

Continue reading