How RDMA Increases Virtualization Performance Without Compromising Efficiency

Virtualization has already proven itself to be the best way to improve data center efficiency and to simplify management tasks. However, getting those benefits requires using the various services that the hypervisor provides. This adds overhead and lengthens execution time compared to running on a non-virtualized data center (native infrastructure). This drawback has not escaped the attention of the high-tech R&D community, which has been seeking ways to enjoy the advantages of virtualization with minimal impact on performance.

One of the most popular ways to achieve native performance today is the SR-IOV (Single Root I/O Virtualization) mechanism, which bypasses the hypervisor and gives the VM a direct link to the I/O adapter. However, although the VM gets native performance, it loses the hypervisor services: important features such as high availability (HA) and VM migration become difficult. SR-IOV also requires the VM to carry the specific NIC driver for the adapter it communicates with, which complicates management because IT managers can no longer rely on the common driver that runs between the VM and the hypervisor.
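To make the SR-IOV model concrete, here is a minimal sketch (assuming a Linux host; the interface names are whatever the host exposes, and this is not a Mellanox tool) that lists each SR-IOV capable NIC and how many virtual functions it currently has configured, by reading the standard sysfs attributes:

```python
import glob
import os

# Minimal sketch: list SR-IOV capable NICs and their configured virtual
# functions (VFs) by reading the standard Linux sysfs attributes. Each VF
# can be passed straight through to a VM, bypassing the hypervisor's
# virtual switch.
for vf_total in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
    device_dir = os.path.dirname(vf_total)
    nic = os.path.basename(os.path.dirname(device_dir))
    with open(vf_total) as f:
        total = int(f.read().strip())
    with open(os.path.join(device_dir, "sriov_numvfs")) as f:
        configured = int(f.read().strip())
    print(f"{nic}: {configured} of {total} virtual functions configured")
```

Each configured VF appears as its own PCI function that the hypervisor can hand directly to a guest.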

As virtualization becomes a standard technology, the industry continues to find ways to improve performance without giving up those benefits, and organizations have started to invest more in the deployment of RDMA-enabled interconnects in virtualized data centers. In one of my previous blogs, I discussed the proven deployment of RoCE (RDMA over Converged Ethernet) in Azure using SMB Direct (SMB 3.0 over RDMA), enabling faster access to storage.

Continue reading

RoCE in the Data Center

Today’s data centers demand that the underlying interconnect provide the utmost bandwidth and extremely low latency. While high bandwidth is important, it is not worth much without low latency. Moving large amounts of data through a network can be achieved with TCP/IP, but only RDMA can produce the low latency that avoids costly transmission delays.

The speedy transfer of data is critical to using it efficiently. An interconnect based on Remote Direct Memory Access (RDMA) offers the ideal option for boosting data center efficiency, reducing overall complexity, and increasing data delivery performance. Mellanox RDMA enables sub-microsecond latency and up to 56Gb/s bandwidth, translating to screamingly fast application performance, better storage and data center utilization, and simplified network management.

Continue reading

Top Three Network Considerations for Large Scale Cloud Deployments

The rapid pace of change in data and business requirements is the biggest challenge when deploying a large-scale cloud.  It is no longer acceptable to spend years designing infrastructure and developing applications capable of coping with data and users at scale. Applications need to be developed in a much more agile manner, but in a way that allows dynamic reallocation of infrastructure to meet changing requirements.

Choosing an architecture that can scale is critical. Traditional “scale-up” technologies are too expensive and can ultimately limit growth as data volumes increase. Trying to accommodate data growth without proper architectural design results in unneeded infrastructure complexity and cost.

The most challenging task for the cloud operator in a modern cloud data center supporting thousands or even hundreds of thousands of hosts is scaling and automating network services.  Fortunately, server virtualization has enabled automation of routine tasks – reducing the cost and time required to deploy a new application from weeks to minutes.   Yet, reconfiguring the network for a new or migrated virtual workload can take days and cost thousands of dollars.

To solve these problems, you need to think differently about your data center strategy.  Here are three technology innovations that will help data center architects design a more efficient and cost-effective cloud:

1.  Overlay Networks

Overlay network technologies such as VXLAN and NVGRE make the network as agile and dynamic as the other parts of the cloud infrastructure. These technologies enable automated network segment provisioning for cloud workloads, resulting in a dramatic increase in cloud resource utilization.
Overlay networks provide the ultimate in network flexibility and scalability, making it possible to:

  • Combine workloads within pods
  • Move workloads across L2 domains and L3 boundaries easily and seamlessly
  • Integrate advanced firewall appliances and network security platforms seamlessly
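
To give a sense of what these overlays do on the wire, the toy sketch below builds the 8-byte VXLAN header defined in RFC 7348, with its 24-bit VNI, and prepends it to a placeholder inner frame. It illustrates the encapsulation format only; a real VTEP would also add the outer UDP/IP headers and typically offloads this work to the NIC.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Byte 0 sets the I flag (VNI valid); the 24-bit VNI sits in bytes 4-6;
    all remaining bits are reserved and set to zero.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24           # I flag set, reserved bits zero
    return struct.pack("!II", flags, vni << 8)

# Encapsulation sketch: the inner Ethernet frame is carried unchanged after
# the VXLAN header; the outer UDP/IP headers are added by the VTEP.
inner_frame = b"\x00" * 64       # placeholder L2 frame
vxlan_packet = vxlan_header(vni=5001) + inner_frame
print(vxlan_packet[:8].hex())    # -> 0800000000138900 for VNI 5001
```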

Continue reading

Deploying Ceph with High Performance Networks

As data continues to grow exponentially, storing today’s data volumes efficiently is a challenge.  Many traditional storage solutions neither scale out nor make it feasible, from a CapEx and OpEx perspective, to deploy petabyte- or exabyte-scale data stores.


In this newly published whitepaper, we summarize the installation and performance benchmarks of a Ceph storage solution. Ceph is a massively scalable, open source, software-defined storage solution, which uniquely provides object, block and file system services with a single, unified Ceph storage cluster. The testing emphasizes the careful network architecture design necessary to handle users’ data throughput and transaction requirements.
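
As a flavor of the unified interface Ceph exposes, the short sketch below writes and reads back a single object through the librados Python bindings. It assumes a reachable cluster, a ceph.conf in the default location and an existing pool named "data"; treat it as an illustrative snippet, not part of the whitepaper’s benchmark setup.

```python
import rados  # librados Python bindings (python-rados package)

# Connect to the cluster using the local client configuration.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Open an I/O context on an existing pool (assumed to be named "data").
    ioctx = cluster.open_ioctx("data")
    try:
        ioctx.write_full("hello-object", b"stored via librados")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```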

 

Ceph Architecture

Continue reading

Mellanox Open Enrollment Training: A Continuous Learning Solution

One of the most important value-add solutions that Mellanox provides to its customers and partners is Educational Services.  We offer a variety of learning methods to our partners, customers and other technology leaders.

 


One of our most successful learning platforms for customers is our open enrollment courses.  These 3-4 day instructor-led courses are available worldwide: in the United Kingdom, Germany, France, Israel, Australia, China and, in the US, in New York, California, Massachusetts and Washington.  Soon we will offer an “after hours” virtual format, a blended learning approach (remote instructor-led sessions combined with online training) that gives participants the flexibility to take the course without missing many working hours.

Continue reading

Recap: OpenStack Summit 2014 – Atlanta, GA

This past week in Atlanta, I got the chance to attend sessions, present and exhibit at the OpenStack Summit.  The Summit was attended by over 4,500 registered participants.  Today there are more users than ever!  More than 200 companies have joined the project, and the main contributors to the current OpenStack release are Red Hat, HP and IBM.  The OpenStack Foundation has posted a recap video showing some of the highlights.

 

 

Some themes emerged during the summit.  The trend of big users becoming major contributors is really taking off.  These big users include large banks, manufacturers, retailers, government agencies, entertainment companies and everything in between.  Instead of spending time trying to convince vendors to add features, these large organizations have realized that they can work with the OpenStack community directly to add those features and, as a result, move faster as a business.

Continue reading

Building an Enterprise Class Big Data Solution with IBM BigInsights, IBM GPFS, FPO and Mellanox RDMA

Big Data solutions such as Hadoop and NoSQL applications are no longer the sole domain of Internet moguls. Today’s retail, transportation and entertainment corporations use Big Data practices such as Hadoop for data storage and data analytics.

IBM BigInsights makes Big Data deployments an easier task for the system architect. BigInsights with IBM’s GPFS-FPO file system support provides an enterprise-level Big Data solution, eliminating single points of failure and increasing ingest and analytics performance.

The inherent RDMA support in IBM’s GPFS takes performance a notch higher. Testing conducted at the Mellanox Big Data Lab with IBM BigInsights 2.1, GPFS-FPO and FDR 56Gb/s InfiniBand showed write and read performance gains of 35% and 50%, respectively, compared to a vanilla HDFS deployment. On the analytics benchmarks, enabling the RDMA feature provided a 35% throughput gain.

Continue reading

Mellanox Powers EMC Scale-Out Storage

This week is EMC World, a huge event with tens of thousands of customers, partners, resellers and EMC employees talking about cloud, storage, and virtualization. EMC sells many storage solutions but most of the excitement and recent growth (per the latest EMC earnings announcement) are about scale-out storage, including EMC’s Isilon, XtremIO, and ScaleIO solutions.

As mentioned in my blog on the four big changes in storage, traditional scale-out storage connects many storage controllers together, while the new scale-out server storage links the storage on many servers. In both designs, the disk or flash on all the nodes is viewed and managed as one large pool of storage. Instead of having to manually partition and assign workloads to different storage systems, workloads can be either shifted seamlessly from node to node (with no downtime) or distributed across the nodes.

Clients connect to (scale-out storage) or run on (scale-out server storage) different nodes but must be able to access storage on other nodes as if it were local. If I’m connecting to node A, I need rapid access to the storage on nodes A, B, C, D, and all the other nodes in the cluster. The system may also migrate data from one node to another, and rapidly exchange metadata or control traffic to keep track of who has which data.
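
As a toy illustration of that idea (a simple hash placement, not EMC’s actual algorithm), the sketch below maps object names onto a small set of hypothetical nodes, so every client computes the same owner for a given object no matter which node it is attached to:

```python
import hashlib

NODES = ["node-A", "node-B", "node-C", "node-D"]  # hypothetical cluster

def owner(object_name: str) -> str:
    """Map an object to a node with a simple hash; every client computes
    the same answer, so data is reachable no matter where it lives."""
    digest = hashlib.sha1(object_name.encode()).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

for name in ("vm-image-17", "db-extent-0042", "log-2014-05-05"):
    print(name, "->", owner(name))
```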

Continue reading

How Scale-Out Systems Affect Amdahl’s Law

In 1967, Gene Amdahl developed a formula that calculates the overall efficiency of a computer system based on how much of the processing can be parallelized and how much parallelism the specific system can actually apply.
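
For reference, the classical form of the law states that if a fraction p of the work can be parallelized across N processors, the overall speedup is:

```latex
\mathrm{Speedup}(N) = \frac{1}{(1 - p) + \frac{p}{N}}
```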

At that time, deeper performance analysis had to take into consideration the efficiency of the three main hardware resources needed for the computation job: compute, memory and storage.

On the compute side, efficiency is measured by how many threads can run in parallel (which depends on the number of cores).  The memory size affects the percentage of I/O operations that must go to storage, which significantly slows execution and reduces overall system efficiency.

This three-resource model worked very well until the early 2000s, when the computer industry started to adopt grid computing or, as it is known today, scale-out systems.  The benefits of the scale-out architecture are clear: it enables building systems with higher performance that are easy to scale, with built-in high availability, at a lower cost. However, the efficiency of those systems depends heavily on the performance and resiliency of the interconnect solution.

The importance of the interconnect is even greater in the virtualized data center, where the amount of east-west traffic continues to grow (as more work is done in parallel). So, if we want to use Amdahl’s law to analyze the efficiency of a scale-out system, then in addition to the three traditional resources (compute, memory and storage), a fourth one, the interconnect, has to be considered as well.
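
As a back-of-the-envelope sketch (my own simplification, not a formula from the original law), the interconnect can be folded in as an extra non-overlapped term alongside the serial and parallel portions:

```python
def scale_out_speedup(p: float, nodes: int, comm_overhead: float) -> float:
    """Toy extension of Amdahl's law for a scale-out cluster.

    p             -- fraction of the job that can run in parallel
    nodes         -- number of nodes sharing the parallel work
    comm_overhead -- interconnect time (normalized to the serial runtime)
                     spent on east-west traffic that cannot be overlapped
    """
    serial = 1.0 - p
    parallel = p / nodes
    return 1.0 / (serial + parallel + comm_overhead)

# The same 95%-parallel job on 32 nodes: an extra 5% of runtime spent on
# communication drops the speedup from roughly 12.5x to roughly 7.7x.
print(round(scale_out_speedup(0.95, 32, 0.00), 1))
print(round(scale_out_speedup(0.95, 32, 0.05), 1))
```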

Continue reading

See the Elephant’s Room in Vegas!

Las Vegas, Nevada is not only home to gaming, art, shows and fun; it also serves as home to one of the largest Hadoop clusters in the world!

 

Racks in the Switch SuperNAP – Photo Courtesy of Switch

During the upcoming 2014 EMC World show, we invite you to join us for an informative tour of SuperNAP, the world’s leader in data center ecosystem development and home of the 1,000-node Hadoop cluster.  On this tour, we will show how a Hadoop cluster deployed in a co-location data center is maintained and provides analytics tools for a large community of businesses and academic institutes. It will be a great opportunity to learn about actual working cluster workloads, design considerations and available tools for next-generation business opportunities in Big Data.

Continue reading