Ceph Is A Hot Storage Solution – But Why?


In talks with customers, server vendors, the IT press, and even within Mellanox, one of the hottest storage topics is Ceph. You’ve probably heard of it, and many big customers are implementing or evaluating it. But I am also frequently asked the following:

  • What is Ceph?
  • Why is it a hot topic in storage?
  • Why does Mellanox, a networking company, care about Ceph, and why should Ceph customers care about networking?

I’ll answer #1 and #2 in this blog and #3 in another blog.


Figure 1: A bigfin reef squid (Sepioteuthis lessoniana) of the Class Cephalopoda


Ceph: What It Is

Ceph is open source, software-defined storage maintained by Red Hat since its acquisition of Inktank in April 2014. It is capable of block, object, and file storage, though only block and object are currently deployed in production. It is scale-out, meaning multiple Ceph storage nodes (servers) cooperate to present a single storage system that easily handles many petabytes (1 PB = 1,000 TB = 1,000,000 GB) and increases both performance and capacity as nodes are added. Ceph has many basic enterprise storage features, including replication (or erasure coding), snapshots, thin provisioning, tiering (the ability to shift data between flash and hard drives), and self-healing capabilities.
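
To make the object side of that concrete, here is a minimal sketch of storing and reading back an object with the python-rados bindings that ship with Ceph. It’s an illustration, not a tuned deployment: it assumes a running cluster reachable through /etc/ceph/ceph.conf with a keyring that may create pools, and the pool and object names (‘blog-demo’, ‘greeting’) are hypothetical.

```python
import rados

# Connect using the standard config file location (assumes a running
# cluster and a client keyring with permission to create pools).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

pool = 'blog-demo'                  # hypothetical pool name
if not cluster.pool_exists(pool):
    cluster.create_pool(pool)

# An I/O context scopes reads and writes to a single pool.
ioctx = cluster.open_ioctx(pool)
ioctx.write_full('greeting', b'hello ceph')   # store an object
print(ioctx.read('greeting'))                 # b'hello ceph'

ioctx.close()
cluster.shutdown()
```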


Why Ceph is HOT

In many ways, Ceph is a unique animal. It’s the only storage solution that delivers all four of these critical capabilities:

  • open-source
  • software-defined
  • enterprise-class
  • unified storage (object, block, file)

Many other storage products are open source, scale-out, software-defined, unified, or rich in enterprise features, and some combine two or three of those qualities, but almost nothing else offers all four capabilities together.

  • Open source means lower cost
  • Software-defined means deployment flexibility, faster hardware upgrades, and lower cost
  • Scale-out means it’s less expensive to build large systems and easier to manage them
  • Block + Object means more flexibility (most other storage products are block only, file only, object only, or file+block; block+object is very rare); see the sketch after this list
  • Enterprise features mean a reasonable amount of efficiency and data protection
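
As a rough sketch of that block + object flexibility, the same cluster that served objects above can also present a thin-provisioned block device through the python-rbd bindings. The assumptions are labeled in the comments: the traditional default ‘rbd’ pool exists, and the image name is hypothetical.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')   # assumes the traditional default 'rbd' pool exists

# Create a 1 GiB thin-provisioned block image; capacity is only
# consumed as data is actually written.
rbd.RBD().create(ioctx, 'demo-image', 1024 ** 3)   # hypothetical image name

image = rbd.Image(ioctx, 'demo-image')
image.write(b'block data', 0)       # write 10 bytes at offset 0
print(image.read(0, 10))            # b'block data'
image.close()

ioctx.close()
cluster.shutdown()
```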

The table below summarizes how each of these capabilities translates into concrete benefits:


Feature             | Means                                      | Final Benefit
--------------------|--------------------------------------------|------------------------------------------
Open Source         | No license fees                            | Lower cost
Software-defined    | Different hardware for different workloads | Broader use cases, higher efficiency
                    | Use commodity hardware                     | Lower cost, easier to evaluate
Scale-out           | Manage many nodes as one system            | Easier to manage = lower operational cost
                    | Distributed capacity                       | Multi-PB capacity for object & cloud
                    | Distributed performance                    | Good performance from low-cost servers
Block + Object      | Store more types of data                   | Broader use cases
Enterprise features | Data protection                            | Don’t lose valuable data
                    | Self-healing                               | Higher availability, easier management
                    | Data efficiency                            | Lower cost
                    | Caching/tiering                            | Higher performance at lower cost


Figure 2: Venn diagram showing why Ceph might be unique


Despite all that Ceph has to offer, there are still two camps: those who love it and those who dismiss it.


I Love Ceph!

The nature of Ceph means some of the storage world loves it, or at least has very high hopes for it. Server vendors generally love Ceph because it lets them sell servers as enterprise storage without needing to develop and maintain complex storage software. The drive makers (of both spinners and SSDs) want to love Ceph because it turns their drive components into a complete storage system. It also lowers the cost of the software and controller components of storage, leaving more money to spend on drives and flash.


Ceph, Meh!

On the other hand, many established storage hardware and software vendors hope Ceph will fade into obscurity. Vendors who have already developed richly featured software don’t like it because it is cheaper competition that puts downward price pressure on their own software. Those who sell tightly coupled storage hardware and software fear it because they can’t revise their hardware as quickly, or sell it as cheaply, as the commodity server vendors used by most Ceph customers.


To be honest, Ceph isn’t perfect for everyone. It’s not the most efficient at using flash or CPU (though it’s getting better), its file storage feature isn’t fully mature yet, and it’s missing key efficiency features like deduplication and compression. And some customers simply aren’t comfortable with open-source or software-defined storage of any kind. But every release of Ceph adds new features and improves performance, while system integrators build turnkey Ceph appliances that are easy to deploy and come with integrated hardware and software support.

What’s Next for Ceph (and this Blog)?

Ceph continues to evolve, backed by Red Hat and by a community of users and vendors who want to see it succeed. With every release it gets faster, gains new features, and becomes easier to manage. In my next blog, I’ll explain how Mellanox makes Ceph go faster, what we’re contributing to Ceph, and the testing we’ve been doing with Red Hat and other partners such as Supermicro and SanDisk.


In the meantime, to learn the latest news about Ceph and what Mellanox has been doing with Red Hat and for Ceph, come to Red Hat Summit in Boston. We’ll be showing off some of our Ceph work in booth #321, open June 23-25, 2015.


About John F. Kim

John Kim is Director of Storage Marketing at Mellanox Technologies, where he helps storage customers and vendors benefit from high-performance interconnects and RDMA (Remote Direct Memory Access). After starting his high-tech career on an IT helpdesk, John worked in enterprise software and networked storage, spending many years in solution marketing, product management, and alliances at enterprise software companies, followed by 12 years at NetApp and EMC. Follow him on Twitter: @Tier1Storage
