Last week (on December 9th, 2013), Symantec announced the general availability (GA) of their clustered file storage (CFS) solution. The new solution enables customers to access mission-critical data and applications 400% faster than traditional Storage Area Networks (SANs), at 60% of the cost.
Faster, yet cheaper! Sounds like magic! How are they doing it?
To understand the "magic," it is important to understand the advantages that SSDs, combined with a high-performance interconnect, bring to modern scale-out (or clustered) storage systems. Up to now, SAN-based storage has typically been used to increase performance and provide data availability for multiple applications and clustered systems. However, with the recent demand from high-performance applications, SAN vendors are trying to add SSDs into the storage array itself to provide higher bandwidth and lower-latency response.
Since SSDs offer incredibly high IOPS and bandwidth, it is important to use the right interconnect technology and to avoid bottlenecks on the path to storage. Older fabrics such as Fibre Channel (FC) cannot cope with the demand for faster pipes, as 8Gb/s (or even 16Gb/s) of bandwidth is not enough to satisfy application requirements. While 40Gb/s Ethernet may look like an alternative, InfiniBand (IB) currently supports up to 56Gb/s, with a roadmap to 100Gb/s next year.
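To put those link speeds in perspective, here is a back-of-envelope calculation of how long it takes to move a dataset over each fabric. This is a rough sketch that assumes the link itself is the only bottleneck (it ignores protocol overhead and encoding); the 100 GB dataset size is an arbitrary illustrative choice, not a figure from Symantec's announcement.

```python
# Rough transfer-time comparison across the fabrics mentioned above,
# assuming the raw link rate is the only bottleneck.
GIGABIT = 1e9  # bits per second

fabrics = {
    "8Gb/s Fibre Channel":   8 * GIGABIT,
    "16Gb/s Fibre Channel": 16 * GIGABIT,
    "40Gb/s Ethernet":      40 * GIGABIT,
    "56Gb/s InfiniBand":    56 * GIGABIT,
}

dataset_bits = 100 * 8 * GIGABIT  # a 100 GB dataset, expressed in bits

for name, rate in fabrics.items():
    seconds = dataset_bits / rate
    print(f"{name:>22}: {seconds:6.1f} s")
```

Even against 40Gb/s Ethernet, the 56Gb/s IB link cuts the ideal transfer time by roughly a third; against 8Gb/s FC it is a 7x reduction.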
Both Ethernet and IB have a considerable advantage over FC. This is exactly what Symantec did: connecting the cluster over IB, gaining much higher bandwidth with much lower latency. However, there is more to it than the raw interconnect performance boost. IB is a lossless fabric that uses Remote Direct Memory Access (RDMA) technology to move data. When using Mellanox gear, RDMA is fully offloaded to the IO controllers (in this case, ConnectX-3), minimizing CPU involvement. There is no need to copy the packet several times and no need to execute the TCP/IP stack, so more CPU cycles can be used to run the application itself.
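The CPU savings described above come largely from eliminating buffer copies. Real RDMA code is written against the verbs API in C, but the copy-versus-no-copy distinction can be sketched in plain Python: a TCP/IP-style path duplicates the payload at each hop, while an RDMA-style path hands the NIC a direct reference to the registered buffer. This is purely an illustrative analogy, not actual RDMA code; the function names are invented for the sketch.

```python
# Illustrative sketch only (not real RDMA/verbs code): contrast a path
# that copies the payload at each hop with a zero-copy reference.

payload = bytearray(64 * 1024 * 1024)  # a 64 MiB "packet" buffer

def copy_path(buf):
    # TCP/IP-style path: each stage produces a fresh copy of the data,
    # burning CPU cycles and memory bandwidth on every hop.
    user_to_kernel = bytes(buf)            # copy 1: user buffer -> socket buffer
    kernel_to_wire = bytes(user_to_kernel)  # copy 2: socket buffer -> NIC
    return kernel_to_wire

def zero_copy_path(buf):
    # RDMA-style path: the adapter reads the registered buffer directly;
    # a memoryview hands out a reference without duplicating any bytes.
    return memoryview(buf)

copied = copy_path(payload)
view = zero_copy_path(payload)

assert copied == payload   # the receiver sees the same data either way...
assert view.obj is payload  # ...but the zero-copy path shares storage:
                            # no bytes were duplicated along the way.
```

At 64 MiB per message, the copying path touches every byte twice before the data even leaves the host; the zero-copy path touches none, which is exactly the CPU headroom the offloaded adapter returns to the application.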
So sorry, no magic. Just making the right decisions.
Symantec’s CFS-over-IB product joins other clustered file systems like GPFS, Lustre, and GlusterFS, as well as non-clustered file systems like Microsoft’s SMB Direct (SMB 3.0 over RDMA). It represents an ongoing trend: replacing the old, expensive traditional SAN with a scale-out architecture that uses SSDs as a cache in the nodes alongside in-memory computing technology, running over RDMA and minimizing access to slow HDDs.
Author: Motti Beck is the Director of Marketing, Enterprise Data Center market segment at Mellanox Technologies, Inc. Before joining Mellanox, Motti founded several startup companies, including BindKey Technologies, which was acquired by DuPont Photomask (today Toppan Printing Company LTD), and Butterfly Communications, which was acquired by Texas Instruments. Prior to that, he was a Business Unit Director at National Semiconductor. Motti holds a B.Sc. in computer engineering from the Technion – Israel Institute of Technology.