Tag Archives: Ceph

OpenStack Summit Austin 2016

The OpenStack Summit is a five-day conference for developers, users, and administrators of OpenStack cloud software. Held every six months, the conference rotates its schedule with the OpenStack software release cycle. This week, the summit is being held in Austin, Texas, at the Austin Convention Center.


The summit started yesterday and we had two successful sessions:

Open Composable Networks: Leverage LEGO Design to Transform Cloud Networking by Kevin Deierling, Mellanox VP Marketing

Kevin talked about a new approach to cloud networking that stemmed from the hyper-scale web services giants but is now being made widely available by Mellanox and our cloud ecosystem partners. He shared real-world deployments from OpenStack customers such as Cambridge, Enter, and NCI, and described the LEGO parts they used, such as Mellanox NEO and our end-to-end 25/50/100G Ethernet and InfiniBand intelligent interconnect.

Lightning Talk by Moshe Levi, SW Cloud Manager, about Using a Device Emulator to Enhance CI

Moshe talked about Mellanox SimX and explained how to reduce the number of physical servers and eliminate the physical device dependency in CI.

We invite you to visit Mellanox’s booth (D20) and see the 25/50/100G Cloud Solution based on Spectrum, ConnectX-4 and Mellanox NEO for Network Automation. Make sure to stop by and talk with us!  Here are some photos from yesterday’s sessions along with the Mellanox booth.

[Photos: conference sessions and the Mellanox booth]

Making Ceph Faster: Lessons From Performance Testing

In my first blog on Ceph, I explained what it is and why it’s hot; in my second blog on Ceph, I showed how faster networking can enable faster Ceph performance (especially throughput). But many customers are asking how to make Ceph even faster. And recent testing by Red Hat and Mellanox, along with key partners like Supermicro, QCT (Quanta Cloud Technology), and Intel, has provided more insight into increasing Ceph performance, especially for IOPS-sensitive workloads.

Figure 1: Everyone wants Ceph to go FASTER

 

Different Data in Ceph Imposes Different Workloads

Ceph can be used for block or object storage, and the two impose different workloads. Block workloads usually consist of smaller, random I/O, where data is managed in blocks ranging from 1KB to 64KB in size. Object storage workloads usually consist of large, sequential I/O, with data chunks ranging from 16KB to 4MB in size (and individual objects can be many gigabytes in size). The stereotypical small, random block workload is a database such as MySQL, or active virtual machine images. Common object data include archived log files, photos, and videos. However, in special cases, block I/O can be large and sequential (like copying a large part of a database) and object I/O can be small and random (like analyzing many small text files).
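To make the contrast concrete, here is a minimal sketch (mine, not from the post) that issues the two I/O patterns using the python-rados bindings; the conffile path and the pool name "testpool" are placeholders for whatever your environment uses.

```python
# Illustrative sketch: a "block-like" pattern (many small random writes)
# versus an "object-like" pattern (a few large sequential writes).
# Assumes a reachable cluster and an existing pool named "testpool".
import os
import random
import time

import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("testpool")

# Block-like: 1,024 random 4 KB writes into one 64 MB object.
small = os.urandom(4096)
start = time.time()
for _ in range(1024):
    offset = random.randrange(16384) * 4096   # random 4 KB-aligned offset
    ioctx.write("block-like", small, offset)
print("small random writes: %.2f s" % (time.time() - start))

# Object-like: 16 whole 4 MB objects written back to back.
big = os.urandom(4 * 1024 * 1024)
start = time.time()
for i in range(16):
    ioctx.write_full("object-like-%d" % i, big)
print("large sequential writes: %.2f s" % (time.time() - start))

ioctx.close()
cluster.shutdown()
```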

 

The different workloads put different requirements on the Ceph system. Large sequential I/O (usually objects) tends to stress the storage and network bandwidth for both reads and writes. Small random I/O (usually blocks) tends to stress the CPU and memory of the OSD server as well as the storage and network latency. Reads usually require fewer CPU cycles and are more likely to stress storage and network bandwidth, while writes are more likely to stress the CPUs as they calculate data placement. Erasure Coding writes require more CPU power but less network and storage bandwidth.
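As a rough illustration of that last point, the back-of-the-envelope sketch below (my own numbers, not from the testing) compares the cluster-network traffic and storage written for a single client write under 3x replication versus a 4+2 erasure-coded pool:

```python
# Back-of-the-envelope model: write amplification for one client write of
# `size` bytes under 3x replication versus 4+2 erasure coding.

def replicated(size, copies=3):
    # The primary OSD forwards (copies - 1) full replicas over the cluster
    # network; every copy lands on disk.
    return (copies - 1) * size, copies * size

def erasure_coded(size, k=4, m=2):
    # The object is split into k data chunks plus m coding chunks, each of
    # size/k bytes; the primary keeps one chunk and ships the other k+m-1.
    chunk = size / k
    return (k + m - 1) * chunk, (k + m) * chunk

size = 4 * 1024 * 1024  # a 4 MB write
for name, (net, sto) in [("3x replication", replicated(size)),
                         ("EC 4+2", erasure_coded(size))]:
    print("%-15s network %.2fx  storage %.2fx" % (name, net / size, sto / size))
# Roughly 2x network / 3x storage for replication versus 1.25x / 1.5x for
# EC 4+2, with the erasure-coded pool paying for it in CPU for the coding math.
```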

Continue reading

OpenStack Summit Tokyo 2015

The OpenStack Summit is a four-day conference for developers, users, and administrators of OpenStack cloud software. Held every six months, the conference rotates its schedule with the OpenStack software release cycle. This week, the summit is being held in Tokyo, Japan, at the Grand Prince International Convention Center.

Today, we had a joint session (with Irena Berezovsky of Midokura and Livnat Peer of Red Hat) about Quality of Service in the cloud. I presented a customer use case and talked about Mellanox NEO, containers, virtualization, auto provisioning, and SR-IOV LAG.

Tomorrow is the last chance to visit Mellanox’s booth (S8) and see the 100Gbps Cloud Solution based on Spectrum, ConnectX-4 and Ceph RDMA. Make sure to stop by and talk with us!  Here are some photos from today’s session along with the Mellanox booth:

[Photos: today’s session and the Mellanox booth]

A Good Network Connects Ceph To Faster Performance

In my first blog on Ceph, I explained what it is and why it’s hot. But what does Mellanox, a networking company, have to do with Ceph, a software-defined storage solution?  The answer lies in the Ceph scale-out design. And some empirical results are found in the new “Red Hat Ceph Storage Clusters on Supermicro storage servers” reference architecture published August 10th.

 

Ceph has two logical networks, the client-facing (public) and the cluster (private) networks. Communication with clients or application servers is via the former while replication, heartbeat, and reconstruction traffic run on the latter. You can run both logical networks on one physical network or separate the networks if you have a large cluster or lots of activity.

Figure 1: Logical diagram of the two Ceph networks
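For reference, separating the two networks comes down to two options in ceph.conf. The sketch below is a minimal illustration with placeholder subnets; it simply sanity-checks the two ranges and prints the relevant [global] section:

```python
# Minimal illustration: the two ceph.conf options that split client traffic
# (public network) from replication/heartbeat traffic (cluster network).
# The subnets below are placeholders for your own addressing plan.
import ipaddress

PUBLIC_SUBNET = "192.168.10.0/24"    # client-facing (public) network
CLUSTER_SUBNET = "192.168.20.0/24"   # replication / heartbeat (cluster) network

pub = ipaddress.ip_network(PUBLIC_SUBNET)
clu = ipaddress.ip_network(CLUSTER_SUBNET)
assert not pub.overlaps(clu), "public and cluster networks should be distinct"

print("[global]")
print("public network  = %s" % pub)
print("cluster network = %s" % clu)
# Append the output to ceph.conf on the monitor and OSD hosts; if the
# cluster network is omitted, all traffic shares the public network.
```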

Continue reading

Ceph Is A Hot Storage Solution – But Why?

In talks with customers, server vendors, the IT press, and even within Mellanox, one of the hottest storage topics is Ceph. You’ve probably heard of it and many big customers are implementing it or evaluating it. But I am also frequently asked the following:

  • What is Ceph?
  • Why is it a hot topic in storage?
  • Why does Mellanox, a networking company, care about Ceph, and why should Ceph customers care about networking?

I’ll answer #1 and #2 in this blog and #3 in another blog.

 

Figure 1: A bigfin reef squid (Sepioteuthis lessoniana) of the Class Cephalopoda

 

Continue reading

Deploying Hadoop on Top of Ceph, Using FDR InfiniBand Network

We recently posted a whitepaper on “Deploying Ceph with High Performance Networks,” which used Ceph as a block storage device. In this post, we review the advantages of using CephFS as an alternative to HDFS.

Hadoop has become a leading programming framework in the big data space. Organizations are replacing several traditional architectures with Hadoop, using it as a storage, database, business intelligence, and data warehouse solution. Enabling a single file system for Hadoop and other programming frameworks benefits users who need dynamic scalability of compute and/or storage capabilities.
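As a small illustration of that “single file system” point (my own sketch, not from the whitepaper): once CephFS is mounted, for example with ceph-fuse or the kernel client, it behaves like any other POSIX file system, so Hadoop jobs and ordinary scripts can share one namespace. The mount point below is a placeholder.

```python
# Illustrative sketch: CephFS mounted at a (placeholder) path looks like
# any POSIX file system, so data written by one tool is visible to all.
import os

CEPHFS_ROOT = "/mnt/cephfs"          # assumed ceph-fuse / kernel-client mount
job_input = os.path.join(CEPHFS_ROOT, "hadoop", "input", "part-0000.txt")

os.makedirs(os.path.dirname(job_input), exist_ok=True)
with open(job_input, "w") as f:       # written by an ingest script...
    f.write("some records for the next MapReduce job\n")

with open(job_input) as f:            # ...readable by Hadoop tasks or anything else
    print(f.read().strip())
```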

Continue reading

Deploying Ceph with High Performance Networks

As data continues to grow exponentially, storing today’s data volumes efficiently is a challenge. Many traditional storage solutions neither scale out nor make it feasible, from a CapEx and OpEx perspective, to deploy petabyte or exabyte data stores.


In this newly published whitepaper, we summarize the installation and performance benchmarks of a Ceph storage solution. Ceph is a massively scalable, open source, software-defined storage solution that uniquely provides object, block, and file system services from a single, unified Ceph storage cluster. The testing emphasizes the careful network architecture design necessary to handle users’ data throughput and transaction requirements.
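As a quick taste of that unified model, here is a minimal sketch (mine, not from the whitepaper) that uses one cluster connection for both the object interface (librados) and the block interface (librbd); the pool name and conffile path are placeholders.

```python
# Illustrative sketch: one cluster connection serving the object (librados)
# and block (librbd) interfaces. Assumes an existing pool named "rbd".
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")

# Object service: store and fetch a named object.
ioctx.write_full("hello-object", b"stored as a RADOS object")
print(ioctx.read("hello-object"))

# Block service: a 1 GiB RBD image carved out of the same pool.
rbd.RBD().create(ioctx, "hello-image", 1024 ** 3)
image = rbd.Image(ioctx, "hello-image")
image.write(b"stored inside a block device image", 0)
image.close()

ioctx.close()
cluster.shutdown()
```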

 

Ceph Architecture

Continue reading