Top 7 Reasons Why Fibre Channel Is Doomed

 

Analyst firm Neuralytix just published a terrific white paper about the revolution affecting data storage interconnects. Titled Faster Interconnects for Next Generation Data Centers, it explains why customers are rethinking their data center storage and networks, in particular how iSCSI and iSER (iSCSI with RDMA) are starting to replace Fibre Channel for block storage.


You can find the paper here. It’s on-target about iSCSI vs. FC, but it doesn’t cover the full spectrum of factors dooming FC to a long and slow fadeout from the storage connectivity market. I’ll summarize the key points of the paper as well as the other reasons Fibre Channel has no future.

 

Three Reasons Fibre Channel Is a Dead End, As Explained by Neuralytix:

1.  Flash: Fast Storage Needs Fast Networking

Today’s flash far outperforms hard drives for throughput, latency, IOPS, power consumption, and reliability. It offers better price/performance than hard disks and, according to analysts, already represents 10-15% of shipping enterprise storage capacity. With storage this fast, the physical network and the network protocol must deliver high bandwidth and low latency, otherwise much of the value of flash is wasted. Tomorrow’s NVMe devices will support up to 2-3GB/s (16-24Gb/s) each, with latencies under 50 microseconds (that’s <0.05 milliseconds, versus 2-5 milliseconds for hard drives). Modern Ethernet supports speeds of 100Gb/s per link with latencies of a few microseconds, and combined with the hardware-accelerated iSER block protocol, it’s perfect for extracting maximum performance from non-volatile memory (NVM), whether today’s flash or tomorrow’s next-generation solid state storage.
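To make the mismatch concrete, here is a back-of-the-envelope sketch in Python using the illustrative figures above (roughly 3GB/s, or 24Gb/s, per NVMe device) and nominal line rates with no protocol overhead. The numbers are for illustration, not benchmarks:

```python
# Back-of-the-envelope: how many NVMe devices does it take to saturate one link?
# Per-device throughput and line rates are the illustrative figures from this post.

NVME_DEVICE_GBPS = 24  # ~3 GB/s per NVMe device, expressed in Gb/s

links = {
    "16Gb Fibre Channel": 16,
    "32Gb Fibre Channel": 32,
    "100Gb Ethernet": 100,
}

for name, link_gbps in links.items():
    devices = link_gbps / NVME_DEVICE_GBPS  # assumes the full line rate is usable
    print(f"{name:>20}: saturated by ~{devices:.1f} NVMe device(s)")
```

A 16Gb link is saturated by less than one NVMe device, while a 100Gb Ethernet link leaves headroom for several.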

 


Figure 1: Storage Media Gets Much Faster

 

2. Modern Ethernet Outperforms Fibre Channel

The “old” Ethernet network ran at 1Gb/s or 10Gb/s and relied on TCP to deliver data, which was reliable but somewhat unpredictable. Today’s Ethernet runs at 25, 40, 50, or 100Gb/s, can be lossless, and is no longer dependent on TCP alone. It supports RDMA connections, which lower latency and free up CPU cycles to run applications (or storage features). Ethernet easily supports multiple storage protocols (block, file, object, etc.) simultaneously, and lets client, server, and storage traffic share the same network using traffic prioritization and QoS.

 

Meanwhile, Fibre Channel is still transitioning to 16Gb/s technology and only thinking about 32Gb/s in 2016, slower than what Ethernet supported three years ago. And even once 32Gb/s arrives, FC will still carry only block storage traffic; other storage (and non-storage) traffic will require an Ethernet network anyway.
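Another way to see the gap is the ceiling each link speed puts on small-block traffic. The sketch below (Python, assuming a 4KB block size and nominal line rates with no protocol overhead) is a theoretical upper bound, not a benchmark:

```python
# Upper bound on 4KB-block IOPS imposed by link bandwidth alone
# (nominal line rate, no protocol overhead -- an illustrative ceiling, not a benchmark).

BLOCK_BITS = 4096 * 8  # one 4KB block, in bits

for name, gbps in [("8Gb Fibre Channel", 8), ("16Gb Fibre Channel", 16),
                   ("32Gb Fibre Channel", 32), ("25Gb Ethernet", 25),
                   ("100Gb Ethernet", 100)]:
    max_iops = gbps * 1e9 / BLOCK_BITS
    print(f"{name:>20}: at most ~{max_iops / 1e6:.2f} million 4KB IOPS per link")
```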


Figure 2: Ethernet and InfiniBand Speeds Far Outpace Fibre Channel Speeds

 

3. iSER Turbocharges iSCSI

Once, a long time ago in a data center far, far away, iSCSI was considered the “poor man’s Fibre Channel.” Now iSCSI supports faster speeds than FC, and iSER adds RDMA support on top of it. This lets the network card (NIC) offload the iSCSI data movement and network transport from the CPU, delivering even higher performance and lower latency with low CPU utilization. It works with any application that can use block storage and leverages the management and security tools already built into iSCSI.

Storage Network          | Cost           | Performance
iSCSI (TCP) on 1/10GbE   | Low            | Slow to Medium
Fibre Channel            | High           | Fast
iSER on 10-100GbE        | Low to Medium  | Very Fast

 

 

For details on these three, read the Neuralytix paper. To know the rest of the story, read on.

 

Four More Reasons Fibre Channel Is Doomed, As Explained By the Market:

4.  Growth of File and Object Storage

Analyst firm IDC predicts that file and object storage are growing at 24% per year, much faster than block storage, because file and object are better suited to the typical workloads of the new world of mobile, social, Internet-of-Things, and cloud. File storage includes NFS, CIFS/SMB (including SMB Direct), and scale-out file systems like Gluster, Lustre, and Spectrum Scale (née GPFS). Object storage is written and read through a key-value API such as S3 or Swift. File and object storage can run on InfiniBand or Ethernet but not on Fibre Channel, and nearly all deployments use Ethernet.
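For readers less familiar with the key-value model, here is a minimal sketch of putting and getting an object through the S3 API with the boto3 library. The bucket name, key, and payload are hypothetical placeholders, and any S3-compatible object store works the same way:

```python
# Minimal sketch of key-value object access via the S3 API (boto3).
# Bucket, key, and payload are hypothetical; add endpoint_url=... to point
# at a non-AWS, S3-compatible object store.
import boto3

s3 = boto3.client("s3")

# PUT: store a value under a key (no LUNs, no block addresses, no file paths)
s3.put_object(Bucket="example-bucket",
              Key="sensors/device42/2015-12-14.json",
              Body=b'{"temp_c": 21.5}')

# GET: retrieve the value by the same key
obj = s3.get_object(Bucket="example-bucket",
                    Key="sensors/device42/2015-12-14.json")
print(obj["Body"].read())
```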

 

5.  Cloud Storage Is Software-Defined, Scale-Out and Not Fibre Channel

Everyone knows data is moving to the cloud and taking storage with it; Goldman Sachs predicts a 33% annual growth rate. Cloud uses block storage alongside file and object storage, most of it scale-out and software-defined. Since public and private cloud customers want high performance, low cost, flexibility, and rapid deployment, there is zero reason to deploy a separate and expensive FC network just for the block storage traffic. Instead, the vast majority of cloud deployments use Ethernet storage, and those who want the fastest network and best price-performance deploy InfiniBand (for example, IaaS provider Profitbricks).

 

6.  Hyper-Converged Infrastructure (HCI) Is Showing Hyper-Growth

The hottest area of IT infrastructure today is hyper-converged. It’s growing faster than cloud, possibly at 100% per year, because it simplifies rapid deployment of virtualization and business applications and lets enterprises make their infrastructure more cloud-like. Major vendors like VMware, Microsoft, Nutanix, Dell, Lenovo, and EMC support it, along with a raft of startups (Pivot3, Maxta, etc.). Because it combines compute and storage into a single scale-out layer of servers, all storage is simultaneously local (to each server) and shared across the cluster network. That network must carry both compute and storage traffic and be fast, efficient, and quick to deploy; in other words, pretty much always Ethernet and never Fibre Channel. Analysts say hyper-converged is the fastest growing area of IT, and Fibre Channel has no play here.


Figure 3: Nutanix dominates the hyper-converged infrastructure market

 

7.  Big Video, Big Data, and Big Analytics Demand More From the Network

Finally, there is the catch-all category of big data, big files, and big clusters of compute to process or analyze the data. It is driven by multiple industries, including media and entertainment, oil and gas exploration, semiconductor design, automotive simulation, pharmaceutical research, finance, and retail. On the data side, files can be multiple terabytes and datasets can exceed a petabyte. On the analysis side, clusters can reach hundreds or thousands of nodes, processing millions of operations per second. The storage can be Hadoop, NoSQL databases, clustered file systems (Gluster, Lustre, Spectrum Scale), or object storage like Ceph, but as these clusters grow and use more flash, 10GbE is too slow, and 16Gb FC (which isn’t even supported by many of these architectures) is too slow as well. Each of these topics could be a separate blog, but the conclusion is the same: Fibre Channel is too slow, too expensive, and too inflexible to play in any of these growing markets.
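As a rough illustration of why link speed matters at this scale, here is a quick Python calculation of how long it takes just to move a 1PB dataset at nominal line rate (ignoring protocol overhead and storage-side limits; illustrative only):

```python
# Time to move a 1 PB dataset at nominal line rate (no protocol overhead).
# Purely illustrative; real transfers are also limited by storage and protocol.

PETABYTE_BITS = 1e15 * 8  # 1 PB expressed in bits

for name, gbps in [("10Gb Ethernet", 10), ("16Gb Fibre Channel", 16),
                   ("40Gb Ethernet", 40), ("100Gb Ethernet", 100)]:
    hours = PETABYTE_BITS / (gbps * 1e9) / 3600
    print(f"{name:>20}: ~{hours:.0f} hours to move 1 PB")
```

Even at 100Gb/s, a petabyte takes roughly a day to move, which is why these clusters want the fastest network they can get.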

 

Remember, the context for all of this is that overall IT spending is growing only 5% per year, enterprise storage is growing 3% per year, and traditional block storage array revenues (where Fibre Channel dominates) are shrinking. The key takeaway is that storage growth is concentrated exactly where Fibre Channel cannot play. The only growing storage category still using Fibre Channel is all-flash arrays, and then only when they replace other Fibre Channel storage. In fact, we see all-flash arrays being connected more and more often with other storage protocols, like iSCSI, iSER, and SMB Direct, that run over Ethernet or InfiniBand.

 

The Fibre Channel vendors may be showing somewhat steady revenues, but they’re only achieving this by milking existing customers for expensive 16Gb upgrades and, ironically, by selling more Ethernet products. In the meantime, their FC game is being played in a shrinking pond, far away from all the storage growth.

 


Figure 4: Wikibon predicts the decline of traditional storage and the growth of Server-SAN (hyper-converged)

 


About John F. Kim

John Kim is Director of Storage Marketing at Mellanox Technologies, where he helps storage customers and vendors benefit from high performance interconnects and RDMA (Remote Direct Memory Access). After starting his high tech career in an IT helpdesk, John worked in enterprise software and networked storage, with many years of solution marketing, product management, and alliances at enterprise software companies, followed by 12 years working at NetApp and EMC. Follow him on Twitter: @Tier1Storage
