Deploying Ceph with High Performance Networks

As data continues to grow exponentially, storing today’s data volumes efficiently is a challenge. Many traditional storage solutions neither scale out nor make it feasible, from a CapEx and OpEx perspective, to deploy petabyte- or exabyte-scale data stores.


In this newly published whitepaper, we summarize the installation and performance benchmarks of a Ceph storage solution. Ceph is a massively scalable, open source, software-defined storage solution, which uniquely provides object, block and file system services with a single, unified Ceph storage cluster. The testing emphasizes the careful network architecture design necessary to handle users’ data throughput and transaction requirements.
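A key part of that design is separating client traffic from replication and recovery traffic. As a minimal sketch, assuming example subnets (the actual addressing and link speeds are deployment-specific), the split is expressed in ceph.conf like this:

```ini
[global]
# Front-side network: clients talk to monitors and OSDs here
public network = 192.168.10.0/24

# Back-side network: OSD-to-OSD replication, recovery and rebalancing
cluster network = 192.168.20.0/24
```

Putting replication on its own high-bandwidth network keeps recovery traffic from competing with client I/O, which is exactly where careful interconnect design pays off.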


Ceph Architecture

Continue reading

Mellanox Open Enrollment Training: A Continuous Learning Solution

One of the most important value-add solutions that Mellanox provides is Educational Services.  We offer a variety of learning methods to our partners, customers and other technology leaders.



One of our most successful learning platforms is our open enrollment courses.  These 3-4 day instructor-led courses are available worldwide: in the United Kingdom, Germany, France, Israel, Australia and China, and in the US in New York, California, Massachusetts and Washington.  Soon we will offer an “after hours” virtual format, a blended model (remote instructor-led sessions combined with online training) that gives participants the flexibility to take the course without missing many working hours.

Continue reading

Recap: OpenStack Summit 2014 – Atlanta, GA

This past week in Atlanta, I got the chance to attend sessions, present and exhibit at the OpenStack Summit.  The Summit drew over 4,500 registered participants, and today there are more users than ever!  More than 200 companies have joined the project, and the main contributors to the current OpenStack release are Red Hat, HP and IBM.  The OpenStack Foundation has posted a recap video showing some highlights.


Some themes emerged during the summit.  The trend of big users becoming major contributors is really taking off.  These big users include large banks, manufacturers, retailers, government agencies, entertainment companies and everything in between.  Instead of spending time trying to convince vendors to add features, these large organizations have realized that they can work with the OpenStack community directly to add those features and move faster as a business as a result.

Continue reading

Building an Enterprise Class Big Data Solution with IBM BigInsights, IBM GPFS, FPO and Mellanox RDMA

Big Data solutions such as Hadoop and NoSQL applications are no longer the sole domain of Internet giants. Today’s retail, transportation and entertainment corporations use Big Data practices such as Hadoop for data storage and data analytics.

IBM BigInsights makes Big Data deployments an easier task for the system architect. BigInsights with IBM’s GPFS-FPO file system provides an enterprise-level Big Data solution, eliminating single points of failure and increasing ingest and analytics performance.

The inherent RDMA support in IBM’s GPFS takes the performance aspect a notch higher. Testing conducted at the Mellanox Big Data Lab with IBM BigInsights 2.1, GPFS-FPO and FDR 56Gb/s InfiniBand showed write and read performance gains of 35% and 50%, respectively, compared to a vanilla HDFS deployment. On the analytics benchmarks, enabling the RDMA feature provided a 35% throughput gain.
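For those looking to replicate the setup, GPFS exposes its RDMA support through standard cluster configuration parameters. A minimal sketch, assuming a Mellanox adapter that shows up as mlx4_0 (verify the adapter and port on your nodes with ibstat):

```
# Enable RDMA (verbs) for GPFS node-to-node data transfer
mmchconfig verbsRdma=enable

# Tell GPFS which HCA/port pair to use; "mlx4_0/1" is an assumption,
# substitute whatever ibstat reports on your nodes
mmchconfig verbsPorts="mlx4_0/1"

# Restart GPFS across the cluster so the settings take effect
mmshutdown -a && mmstartup -a
```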

Continue reading

Mellanox Powers EMC Scale-Out Storage

This week is EMC World, a huge event with tens of thousands of customers, partners, resellers and EMC employees talking about cloud, storage, and virtualization. EMC sells many storage solutions but most of the excitement and recent growth (per the latest EMC earnings announcement) are about scale-out storage, including EMC’s Isilon, XtremIO, and ScaleIO solutions.

As mentioned in my blog on the four big changes in storage, traditional scale-out storage connects many storage controllers together, while the new scale-out server storage links the storage on many servers. In both designs, the disk or flash across all the nodes is viewed and managed as one large pool of storage. Instead of having to manually partition and assign workloads to different storage systems, workloads can be either shifted seamlessly from node to node (no downtime) or distributed across the nodes.

Clients connect to (scale-out storage) or run on (scale-out server storage) different nodes but must be able to access storage on other nodes as if it were local. If I’m connecting to node A, I need rapid access to the storage on node A, B, C, D, and all the other nodes in the cluster. The system may also migrate data from one node to another, and rapidly exchange metadata or control traffic to keep track of who has which data.

Continue reading

How Scale-Out Systems Affect Amdahl’s Law

In 1967, Gene Amdahl developed a formula that calculates the overall efficiency of a computer system by analyzing how much of the processing can be parallelized and how much parallelism can be applied in the specific system.
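For reference, the formula is usually written as follows, where p is the fraction of the work that can be parallelized and n is the number of parallel processing units:

```latex
S(n) = \frac{1}{(1 - p) + \dfrac{p}{n}}
```

The serial fraction sets a hard ceiling: even with p = 0.95, the speedup can never exceed 1/0.05 = 20, no matter how many processors are added.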

At that time, deeper performance analysis had to take into consideration the efficiency of the three main hardware resources needed for the computation job: compute, memory and storage.

On the compute side, efficiency is measured by how many threads can run in parallel (which depends on the number of cores).  Memory size affects the percentage of I/O operations that must access storage, which significantly slows execution time and reduces overall system efficiency.

Those three hardware resources served the analysis well until the early 2000s, when the computer industry started to adopt grid computing or, as it is known today, scale-out systems.  The benefits of the scale-out architecture are clear: it enables building higher-performance systems that are easy to scale, with built-in high availability, at a lower cost. However, the efficiency of those systems depends heavily on the performance and resiliency of the interconnect solution.

The importance of the interconnect has grown even further in the virtualized data center, where the amount of east-west traffic continues to grow as more work is done in parallel. So, if we want to use Amdahl’s law to analyze the efficiency of a scale-out system, a fourth item, the interconnect, has to be considered alongside the three traditional ones (compute, memory and storage).
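One illustrative way to capture this, an extension of our own rather than part of the classic law, is to add a communication term c(n) for the fraction of time spent waiting on the interconnect:

```latex
S(n) = \frac{1}{(1 - p) + \dfrac{p}{n} + c(n)}
```

The numbers make the point: with p = 0.95 and n = 32, the classic formula gives a speedup of roughly 12.5, while a mere 2% communication overhead (c = 0.02) pulls it down to about 10. Faster and more resilient interconnects shrink c(n) and recover that lost efficiency.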

Continue reading

See the Elephant’s Room in Vegas!

Las Vegas, Nevada is not only the home of games, art, shows and fun; it also serves as home to one of the largest Hadoop clusters in the world!


Racks in the Switch SuperNAP – Photo Courtesy of Switch

During the upcoming 2014 EMC World show, we invite you to join us for an informative tour of SuperNAP, the world’s leader in data center ecosystem development and home of the 1,000-node Hadoop cluster.  On this tour, we will show how a Hadoop cluster is deployed and maintained in a co-location data center and how it provides analytics tools for a large community of businesses and academic institutes. It will be a great opportunity to learn about actual working cluster workloads, design considerations and available tools for next-generation business opportunities in Big Data.

Continue reading

Virtual Modular Switch (VMS) Values for Your Data Center


Building a large-scale data center is not an easy task, and it is one that comes at considerable cost. The larger the cluster, the larger the core switching element needs to be to carry traffic between the servers and storage elements of the data center.


Multiple redundancy and distribution mechanisms are needed to avoid network outages, make implementations resilient and reduce the business impact of failed network elements.


The Virtual Modular Switch (VMS) solution provides a distributed core element for the data center.  The VMS is logically placed where you would traditionally place a chassis switch, and it targets increased resiliency by offering built-in redundancy and distributing the networking load across multiple elements.

Continue reading

Mellanox and IBM Collaborate to Provide Leading Data Center Solution Infrastructures

Mellanox recently announced a collaboration with IBM to produce tightly integrated server and storage solutions that incorporate our end-to-end FDR 56Gb/s InfiniBand and 10/40 Gigabit Ethernet interconnect solutions with IBM POWER CPUs.  Combining IBM POWER CPUs with the world’s highest-performance interconnect solution will drive data at optimal rates, maximizing performance and efficiency for all types of applications and workloads, as well as enabling dynamic storage solutions that allow multiple applications to efficiently share data repositories.


Advances in high-performance applications are enabling analysts, researchers, scientists and engineers to run more complex and detailed simulations and analyses in a bid to gather game-changing insights and deliver new products to market. This is placing greater demand on existing IT infrastructures, driving a need for instant access to resources – compute, storage, and network.


Companies are looking for faster and more efficient ways to drive business value from their applications and data.  The combination of IBM processor technologies and Mellanox high-speed interconnect solutions can provide clients with an advanced and efficient foundation to achieve their goals.

Continue reading

4K Video Drives New Demands

This week, Las Vegas hosts the National Association of Broadcasters conference, or NAB Show. A big focus is the technology needed to deliver movies and TV shows using 4K video.

Standard DVD video resolution is 720×480. Blu-ray resolution is 1920×1080. But, thanks to digital projection in movie theaters and huge flat-screen TVs at home, more video today is being shot in 4K (4096×2160) resolution.  The video is stored compressed but must be streamed uncompressed for many editing, rendering, and other post-production workflows. Each frame has over 8 million pixels and requires roughly 24x the bandwidth of DVD (4x the bandwidth of Blu-ray).
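To put a number on it, here is a back-of-the-envelope calculation for a single uncompressed 4K stream, assuming 10-bit 4:2:2 sampling (20 bits per pixel) at 24 frames per second; other frame rates and sampling formats scale the result accordingly:

```latex
4096 \times 2160 \,\tfrac{\text{pixels}}{\text{frame}} \times 20 \,\tfrac{\text{bits}}{\text{pixel}} \times 24 \,\tfrac{\text{frames}}{\text{s}} \approx 4.25 \,\text{Gb/s}
```

A single stream at that rate already overwhelms Gigabit Ethernet, and a handful of simultaneous streams in a post-production facility quickly justifies 10GbE, 40GbE or faster ports, which is what the figure below illustrates.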


Bandwidth and network ports required for uncompressed 4K & 8K video


Continue reading