All posts by Motti Beck

About Motti Beck

Motti Beck is Director of Marketing, Enterprise Data Center (EDC) market segment at Mellanox Technologies, Inc. Before joining Mellanox, Motti was a founder of several startup companies, including BindKey Technologies, which was acquired by DuPont Photomask (today Toppan Printing Company LTD), and Butterfly Communications, which was acquired by Texas Instruments. Prior to that, he was a Business Unit Director at National Semiconductor. Motti holds a B.Sc. in computer engineering from the Technion - Israel Institute of Technology. Follow Motti on Twitter: https://twitter.com/mottibeck

How RDMA Increases Virtualization Performance Without Compromising Efficiency

Virtualization has already proven itself to be the best way to improve data center efficiency and to simplify management tasks. However, getting those benefits requires using the various services that the Hypervisor provides, which introduces delay and results in longer execution times compared to running on a non-virtualized data center (native infrastructure). This drawback has not escaped the high-tech R&D community, which has been seeking ways to enjoy the advantages of virtualization with minimal effect on performance.

One of the most popular solutions today for achieving native performance is the SR-IOV (Single Root IO Virtualization) mechanism, which bypasses the Hypervisor and establishes a direct link between the VM and the IO adapter. However, although the VM gets native performance, it loses all of the Hypervisor's services, and important features like high availability (HA) and VM migration become difficult. SR-IOV also requires that the VM run the driver for the specific NIC it communicates with, which complicates management, since IT managers can't use the common driver that runs between the VM and the Hypervisor.
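To make this concrete, here is a minimal sketch of how VFs are typically instantiated on a Linux host, assuming the NIC driver exposes the standard sysfs SR-IOV files; the interface name and VF count are hypothetical placeholders:

```python
# Minimal sketch: instantiate SR-IOV virtual functions (VFs) on a Linux host.
# Assumes the NIC driver exposes the standard sysfs SR-IOV files and that the
# script runs as root. Interface name and VF count are hypothetical.
from pathlib import Path

IFACE = "enp1s0f0"   # hypothetical physical function (PF) interface name
NUM_VFS = 4          # number of VFs to expose to virtual machines

dev = Path(f"/sys/class/net/{IFACE}/device")
max_vfs = int((dev / "sriov_totalvfs").read_text())
if NUM_VFS > max_vfs:
    raise SystemExit(f"{IFACE} supports at most {max_vfs} VFs")

# Writing a nonzero count creates that many VFs; each VF can then be passed
# through to a VM, which talks to the adapter directly, bypassing the Hypervisor.
(dev / "sriov_numvfs").write_text(str(NUM_VFS))
print(f"Enabled {NUM_VFS} VFs on {IFACE}")
```

Each VF appears to its guest as a PCI device of its own, which is precisely why the Hypervisor's services no longer see that traffic.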

As virtualization becomes a standard technology, the industry continues to find ways to improve performance without losing its benefits, and organizations have started to invest more in the deployment of RDMA-enabled interconnects in virtualized data centers. In one of my previous blogs, I discussed the proven deployment of RoCE (RDMA over Converged Ethernet) in Azure using SMB Direct (SMB 3.0 over RDMA) to enable faster access to storage.

Continue reading

How Scale-Out Systems Affect Amdahl’s Law

In 1967, Gene Amdahl developed a formula that calculates the overall efficiency of a computer system from how much of the processing can be parallelized and the degree of parallelization that the specific system can apply.
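In its common form, the law states that if a fraction p of the work can be parallelized across N processors, the overall speedup is:

```latex
S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}}
```

Even with an unlimited number of processors, the speedup can never exceed 1/(1 - p), so the serial fraction ultimately limits the whole system.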

At that time, a deeper performance analysis had to take into consideration the efficiency of the three main hardware resources needed for the computation job: compute, memory, and storage.

On the compute side, efficiency is measured by how many threads can run in parallel (which depends on the number of cores). The memory size affects the percentage of IO operations that need to access storage, which significantly slows execution time and lowers overall system efficiency.

This model of three hardware resources worked very well until the beginning of the 2000s, when the computer industry started to use grid computing, or, as it is known today, scale-out systems. The benefits of the scale-out architecture are clear: it enables building systems with higher performance that are easy to scale, with built-in high availability, at a lower cost. However, the efficiency of those systems depends heavily on the performance and resiliency of the interconnect solution.

The importance of the interconnect becomes even greater in the virtualized data center, where the amount of east-west traffic continues to grow (as more work is done in parallel). So, if we want to use Amdahl's law to analyze the efficiency of a scale-out system, then in addition to the three traditional resources (compute, memory, and storage), a fourth, the interconnect, has to be considered as well.
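As a rough illustration of that fourth term, here is a minimal sketch that adds a communication-overhead term to Amdahl's formula; the model and the sample numbers are illustrative assumptions of mine, not a standard formulation or measured data:

```python
# Illustrative sketch: Amdahl's law extended with an interconnect term.
# speedup(n) = 1 / ((1 - p) + p/n + c*n), where c*n crudely models the
# fraction of time lost to communication as the node count grows.
# All numbers below are illustrative assumptions, not measured data.

def speedup(p: float, n: int, comm_per_node: float = 0.0) -> float:
    """p: parallelizable fraction, n: node count, comm_per_node: per-node cost."""
    return 1.0 / ((1.0 - p) + p / n + comm_per_node * n)

for n in (2, 8, 32, 128):
    fast = speedup(0.95, n, comm_per_node=0.0001)  # low-overhead interconnect
    slow = speedup(0.95, n, comm_per_node=0.002)   # high-overhead interconnect
    print(f"{n:4d} nodes: fast interconnect {fast:5.1f}x, slow {slow:5.1f}x")
```

With a low-overhead interconnect the speedup keeps climbing as nodes are added; with a high-overhead one it peaks early and then falls, which is exactly why the interconnect belongs in the efficiency analysis.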

Continue reading

Are Desktops Becoming the World’s Digital Dinosaur?

It is no secret that recent market trends have forced the traditional desktop to go through a dramatic transformation. It’s also easy to predict that, sooner rather than later, the traditional way of sitting and working in front of a desktop will disappear. Why is this happening? Desktops, which led the digital revolution and ruled the digital world for more than 30 years, are going to experience a sudden death. This reminds me of the way the dinosaurs disappeared. What is the “asteroid” that will destroy such a large and well-established infrastructure? Can it be stopped?

Continue reading

How to Increase Virtual Desktop Infrastructure (VDI) Efficiency

Every IT professional’s goal is to improve TCO. In a Virtual Desktop Infrastructure (VDI) application, the objective is to increase the efficiency by maximizing the number of virtual desktops per server while maintaining response times to users that would be comparable to a physical desktop. In addition, the solution must be resilient since downtime of the VDI application causes the idling of hundreds to thousands of users and consequently reduces overall organizational productivity and increases user frustration.

Low-latency data requests from storage or other servers are the key to enabling more VDI sessions without increasing user response times. Legacy Fibre Channel-connected storage subsystems provide the shared storage that enables moving virtual machines between physical servers, but leveraging an existing Ethernet infrastructure saves costs by combining networking and storage IO over the same cable. iSCSI Extensions for RDMA (iSER) is a computer network protocol that extends the Internet Small Computer System Interface (iSCSI) protocol to use Remote Direct Memory Access (RDMA). It uses the upper layers of iSCSI for session management, discovery, recovery, and so on, and is thus compatible with all the features and functions supported by iSCSI. However, iSER eliminates the TCP/IP processing bottleneck through the following mechanisms (a minimal configuration sketch follows the list):

  • Uses zero copy via RDMA technology
  • CRC is calculated by hardware
  • Works with message boundaries instead of streams
  • The transport protocol is implemented in hardware (minimal CPU cycles per IO)
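
As a rough how-to, the sketch below shows one way to switch an open-iscsi initiator from TCP to the iSER transport; the target IQN and portal address are hypothetical placeholders, and the commands assume a stock open-iscsi install on a host with an RDMA-capable NIC:

```python
# Sketch: point an open-iscsi initiator at iSER instead of TCP.
# Assumes open-iscsi is installed and the NIC/driver support RDMA
# (RoCE or InfiniBand). Target IQN and portal are hypothetical.
import subprocess

TARGET = "iqn.2013-09.com.example:storage.vdi"  # hypothetical target IQN
PORTAL = "192.168.1.100"                        # hypothetical portal address

def run(*args: str) -> None:
    """Echo and run a command, raising on failure."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1. Discover the targets the portal exposes.
run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)

# 2. Update the node record so the session uses the iser transport
#    (RDMA) instead of the default tcp transport.
run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL,
    "--op", "update", "-n", "iface.transport_name", "-v", "iser")

# 3. Log in; block IO to the target now flows over RDMA.
run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login")
```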

Recently, at VMworld’13, LSI Corporation and Mellanox Technologies presented a joint solution that accelerates access to storage. The solution includes LSI’s Nytro MegaRAID NMR 8110-4i card, which combines 200GB of on-card flash with eight SAS HDDs, and Mellanox’s ConnectX®-3 Pro adapter, which supports 10Gb/s RoCE storage connectivity between the servers. VDI performance (over TCP/IP and RoCE) was measured using Login VSI’s VDI load generator, which creates the actual workload of a typical Windows user using Microsoft Office.

Running Login VSI showed that over 10GE TCP/IP only 65 virtual desktops responded within 5 seconds or less, versus 140 when running over 10GE RoCE. This translates into more than a 2X cost saving on the VDI hardware infrastructure, proving the solution to be an excellent, economical alternative to legacy Fibre Channel-based storage subsystems.
