Tag Archives: RDMA

Storage Spaces Direct: If Not RDMA, Then What? If Not Mellanox, Then Who?

Over the past couple of years, we have witnessed significant architectural changes in modern data center storage systems. These changes have had a dramatic effect: they have practically replaced the traditional Storage Area Network (SAN), which had been the dominant solution for over a decade.

 

When analyzing the market trends that led to this change, it becomes very clear that virtualization is the main driver. The SAN architecture was very efficient when only one workload accessed the storage array, but it has become much less efficient in a virtualized environment, in which different workloads arrive from different, independent Virtual Machines (VMs).

 

To better understand this concept, let's use a city's traffic light system as an analogy for a data center's data traffic. In this analogy, the cars are the data packets (which come in different sizes), and the traffic lights are the data switches. Before the city programs a traffic light's controller, it conducts a thorough study of the traffic patterns at that intersection and in the surrounding area.

 

Continue reading

Double Your Storage System Efficiency

Enable Higher IOPS while Maximizing CPU Utilization

As virtualization is now a standard technology in the modern data center, IT managers are seeking ways to increase efficiency by adopting new architectures and technologies that enable faster data processing and allow more jobs to be executed over the same infrastructure, thereby lowering the cost per job. Since CPUs and storage systems are the two main contributors to infrastructure cost, using fewer CPU cycles and accelerating access to storage are the keys to achieving higher efficiency.

 

The ongoing need to support mobility and real-time analytics of constantly growing amounts of data calls for new architectures and technologies: ones that make smarter use of expensive CPU cycles, and that replace old storage systems which were very efficient in the past but have become hard to manage and extremely expensive to scale in modern virtualized environments.

 

With an average cost of $2,500 per CPU, about 50% of a compute server's cost is due to the CPUs. The I/O controllers, on the other hand, cost less than $100. Thus, offloading tasks from the CPU to the I/O controller frees expensive CPU cycles and increases overall server efficiency. Other expensive components, such as SSDs, therefore no longer need to wait extra cycles for the CPU. This means that using advanced I/O controllers with offload engines results in a much more balanced system that increases overall infrastructure efficiency.

 

Continue reading

How-old.net and the Driving Forces behind the Age of Machine Learning

I am on a business trip and had dinner with a few coworkers last night. During dinner, one of them proudly pulled out his smartphone and bragged about how young how-old.net thought he was. Indeed, the age that how-old.net spat out was about two thirds of his real age.

 

Of course, he had to take everyone's picture, and we had a good laugh about the results. Moreover, right before I started this business trip a couple of days ago, several friends of mine had posted similar pictures from how-old.net online; it had gone viral! In case you haven't tried it, here is how it looks:

 

How-Old.net

 

Continue reading

How to Achieve Higher Efficiency in Software Defined Networks (SDN) Deployments

During the last couple of years, the networking industry has invested a lot of effort into developing Software Defined Network (SDN) technology, which is drastically changing data center architecture and enabling large-scale clouds without significantly escalating the TCO (Total Cost of Ownership).

 

The secret of SDN is not that it enables control of data center traffic via software (it's not as if IT managers were using screwdrivers to manage the network before), but rather that it decouples the control path from the data path. This represents a major shift from traditional data center networking architecture and therefore offers agility and better economics in modern deployments.

 

For readers who are not familiar with SDN, a simple example can demonstrate the efficiency that SDN provides. Imagine a traffic light that makes its own decisions about when to change and sends data to the other lights. Now imagine that being replaced by a centralized control system that takes a global view of the entire traffic pattern throughout the city and therefore makes smarter decisions about how to route the traffic.

 

The centralized control unit tells each of the lights what to do (using a standard protocol), reducing the complexity of the local units while increasing overall agility. For example, in an emergency, the system can reroute traffic and allow rescue vehicles faster access to the source of the issue.

 

Tokyo Traffic Control Center; photo courtesy of @CScoutJapan

Continue reading

Establishing a High Performance Cloud with Mellanox CloudX

When it comes to advanced scientific and computational research in Australia, the leading organization is the National Computational Infrastructure (NCI). NCI was tasked with forming a national research cloud as part of a government effort to connect eight geographically distinct Australian universities and research institutions into a single national cloud system.

 

NCI decided to establish a high-performance cloud, based on Mellanox 56Gb/s Ethernet solutions.  NCI, home to the Southern Hemisphere’s most powerful supercomputer, is hosted by the Australian National University and supported by three government agencies: Geoscience Australia, the Bureau of Meteorology, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).

Continue reading

Road to 100Gb/sec…Innovation Required! (Part 1 of 3)

Transport Layer Innovation: RDMA

During my undergraduate days at UC Berkeley in the 1980s, I remember climbing through the attic of Cory Hall running 10Mbit/sec coaxial cables to professors' offices. Man, that 10base2 coax was fast!! Here we are in 2014, right on the verge of 100Gbit/sec networks. A four-orders-of-magnitude increase in bandwidth is no small engineering feat, and achieving 100Gb/s network communications requires innovation at every level of the seven-layer OSI model.

To tell you the truth, I never really understood the top three layers of the OSI model: I prefer the TCP/IP model, which collapses all of them into a single "Application" layer, and that makes more sense to me. Unfortunately, it also collapses the Link layer and the Physical layer, and I don't think it makes sense to combine those two. So I like to use my own 'hybrid' model, which collapses the top three layers into an Application layer but allows you to consider the Link and Physical layers separately.

[Figure 1: the 'hybrid' network layer model]

It turns out that a tremendous amount of innovation is required in these bottom four layers to achieve effective 100Gb/s communication networks. The application layer needs to change as well to take full advantage of 100Gb/s networks. For now, we'll focus on the bottom four layers.

Continue reading

How RDMA Increases Virtualization Performance Without Compromising Efficiency

Virtualization has already proven itself to be the best way to improve data center efficiency and to simplify management tasks. However, getting those benefits requires using the various services that the Hypervisor provides. This introduces delay and results in longer execution times compared to running on a non-virtualized data center (native infrastructure). This drawback hasn't escaped the high-tech R&D community, which keeps seeking ways to enjoy the advantages of virtualization with minimal effect on performance.

One of the most popular solutions today for achieving native performance is the SR-IOV (Single Root I/O Virtualization) mechanism, which bypasses the Hypervisor and creates a direct link between the VM and the I/O adapter. However, although the VM gets native performance, it loses all of the Hypervisor services. Important features like high availability (HA) or VM migration can't be done easily. Using SR-IOV also requires that the VM carry the specific driver of the NIC it communicates with, which complicates management, since IT managers can't use the common driver that runs between the VM and the Hypervisor.
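To make the mechanism a little more concrete, here is a minimal sketch of how virtual functions (VFs) are typically enabled on a Linux host through the kernel's sysfs interface. The interface name (ens1f0) and the VF count are illustrative assumptions; each resulting VF can then be passed through directly to a VM, bypassing the Hypervisor's virtual switch:

```c
/* Minimal sketch: enable SR-IOV virtual functions on a physical NIC
 * through the Linux sysfs interface. The interface name "ens1f0" and
 * the VF count (4) are assumptions for illustration; running this
 * requires root and an SR-IOV-capable adapter and driver. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/ens1f0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("open sriov_numvfs");
        return 1;
    }

    /* Writing a number asks the kernel and the NIC driver to create
     * that many virtual functions. Each VF appears as its own PCIe
     * device that can be assigned to a VM, which then needs the VF
     * driver for that specific NIC. */
    fprintf(f, "4\n");
    fclose(f);
    return 0;
}
```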

As virtualization becomes a standard technology, the industry continues to look for ways to improve performance without losing those benefits, and organizations have started to invest more in deploying RDMA-enabled interconnects in virtualized data centers. In one of my previous blog posts, I discussed the proven deployment of RoCE (RDMA over Converged Ethernet) in Azure using SMB Direct (SMB 3.0 over RDMA), enabling faster access to storage.

Continue reading

RoCE in the Data Center

Today’s data centers demand that the underlying interconnect provide the utmost bandwidth and extremely low latency. While high bandwidth is important, it is not worth much without low latency. Moving large amounts of data through a network can be achieved with TCP/IP, but only RDMA can produce the low latency that avoids costly transmission delays.

Moving data quickly is critical to using it efficiently. An interconnect based on Remote Direct Memory Access (RDMA) offers the ideal option for boosting data center efficiency, reducing overall complexity, and increasing data delivery performance. Mellanox RDMA enables sub-microsecond latency and up to 56Gb/s of bandwidth, translating to screamingly fast application performance, better storage and data center utilization, and simplified network management.
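To give a feel for what bypassing the kernel looks like in practice, here is a minimal sketch of the resource setup an RDMA application performs with the standard libibverbs API. It only opens the first device it finds, registers a buffer, and creates a queue pair; connection establishment and the actual RDMA read/write work requests are left out, and the buffer size and queue depths are illustrative assumptions:

```c
/* Minimal libibverbs resource-setup sketch (not a complete application).
 * Compile with -libverbs on a host with an RDMA-capable adapter. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **dev_list = ibv_get_device_list(NULL);
    if (!dev_list || !dev_list[0]) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the adapter can read/write it directly,
     * without involving the kernel or the remote CPU per transfer. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue and queue pair: work requests posted to the QP
     * are executed by the adapter; completions are polled from the CQ
     * without per-packet kernel involvement. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr qp_attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap     = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,   /* reliable connected transport */
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &qp_attr);

    printf("device %s ready: PD, MR, CQ and QP created\n",
           ibv_get_device_name(dev_list[0]));

    /* A real application would now exchange QP numbers and the MR's
     * rkey with its peer, transition the QP to the RTS state, and post
     * IBV_WR_RDMA_WRITE / IBV_WR_SEND work requests. */

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```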

Continue reading

Building an Enterprise Class Big Data Solution with IBM BigInsights, IBM GPFS, FPO and Mellanox RDMA

Big Data solutions such as Hadoop and NoSQL applications are no longer the sole domain of Internet giants. Today's retail, transportation, and entertainment corporations use Big Data practices such as Hadoop for data storage and data analytics.

IBM BigInsights makes Big Data deployments an easier task for the system architect. BigInsights with IBM's GPFS-FPO file system support provides an enterprise-level Big Data solution, eliminating single-point-of-failure structures and increasing ingest and analytics performance.

The inherent RDMA support in IBM's GPFS takes the performance aspect a notch higher. Testing conducted at the Mellanox Big Data Lab with IBM BigInsights 2.1, GPFS-FPO, and FDR 56Gb/s InfiniBand showed write and read performance increases of 35% and 50%, respectively, compared to a vanilla HDFS deployment. On the analytics benchmarks, enabling the RDMA feature provided a 35% throughput gain.

Continue reading

How Scale-Out Systems Affect Amdahl’s Law

In 1967, Gene Amdahl developed a formula that calculates the overall efficiency of a computer system by analyzing how much of the processing can be parallelized and the amount of parallelization that can be applied in the specific system.
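For reference, the law is usually written in terms of the fraction of the work that can be parallelized. If p is that fraction and n is the number of processors working in parallel, the overall speedup is

\[
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}
\]

so no matter how large n grows, the serial fraction (1 - p) puts a hard ceiling of 1/(1 - p) on the achievable speedup.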

At that time, deeper performance analysis had to take into consideration the efficiency of the three main hardware resources needed for the computation job: compute, memory, and storage.

On the compute side, efficiency is measured by how many threads can run in parallel (which depends on the number of cores). The memory size affects the percentage of I/O operations that need to access storage, which significantly slows execution time and reduces overall system efficiency.

This model of three hardware resources worked very well until the early 2000s, when the computer industry started to adopt grid computing or, as it is known today, scale-out systems. The benefits of the scale-out architecture are clear: it enables building systems with higher performance that are easy to scale, with built-in high availability, at a lower cost. However, the efficiency of those systems depends heavily on the performance and resiliency of the interconnect solution.

The importance of the interconnect has become even greater in the virtualized data center, where the amount of east-west traffic continues to grow (as more work is done in parallel). So, if we want to use Amdahl's Law to analyze the efficiency of a scale-out system, then in addition to the three traditional items (compute, memory, and storage), a fourth item, the interconnect, has to be considered as well.

Continue reading