Monthly Archives: April 2009

I/O Virtualization

I/O virtualization is a complementary solution to server and storage virtualization, aimed at reducing the management complexity of the physical connections into and out of virtual hosts. Virtualized data center clusters have multiple networking connections to the LAN and SAN, and virtualizing the network avoids the extra complexity associated with managing them. While I/O virtualization reduces management complexity, to maintain high productivity and scalability one should also pay attention to the other characteristics of the network being virtualized.

Offloading network virtualization from the VMM (virtual machine manager, e.g. a hypervisor) to a smart networking adapter not only reduces the CPU overhead associated with managing the virtualization, but also increases the performance available to the virtual machines (or guest OSs) and can deliver native performance capabilities to them.

The PCI-SIG has standards in place to help simplify I/O virtualization. The most interesting solution is Single Root I/O Virtualization (SR-IOV). SR-IOV allows a smart adapter to create multiple virtual adapters (virtual functions) on a given physical server. These virtual adapters can be assigned directly to a virtual machine (VM) instead of relying on the VMM to manage everything.

SR-IOV provides a standard mechanism for devices to advertise their ability to be simultaneously shared among multiple virtual machines. SR-IOV allows the partitioning of PCI functions into many virtual interfaces for the purpose of sharing the resources of a PCI device in a virtual environment.
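
To make the mechanism a bit more concrete, here is a minimal sketch of how virtual functions are typically enabled from the host side. It assumes a Linux kernel new enough to expose the standard sriov_totalvfs/sriov_numvfs sysfs attributes for an SR-IOV-capable adapter; the PCI address and the number of requested virtual functions are hypothetical placeholders.

```python
# Minimal sketch (not a supported procedure): enable SR-IOV virtual functions
# on a Linux host via the standard sysfs attributes. Requires root privileges
# and an SR-IOV-capable adapter; the PCI address below is a placeholder.
from pathlib import Path

pf = Path("/sys/bus/pci/devices/0000:03:00.0")  # hypothetical physical function

total_vfs = int((pf / "sriov_totalvfs").read_text())  # VFs the device can expose
requested = min(8, total_vfs)                         # e.g. one VF per planned VM

# Writing to sriov_numvfs asks the driver to instantiate that many virtual
# functions; each then appears as its own PCI function that the VMM can
# assign directly to a virtual machine.
(pf / "sriov_numvfs").write_text(str(requested))
print(f"Enabled {requested} of {total_vfs} possible virtual functions")
```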

Mellanox interconnect solutions provide full SR-IOV support while adding the scalability and high throughput required to effectively support multiple virtual machines on a single physical server. With Mellanox 10GigE or 40Gb/s InfiniBand solutions, each virtual machine can get the bandwidth allocation it needs to ensure the highest productivity and performance, just as if it were a physical server.

Gilad Shainer
Director of Technical Marketing
gilad@mellanox.com

High-Performance Computing as a Service (HPCaaS)

High-performance clusters bring many advantages to the end user, including flexibility and efficiency. With the increasing number of applications being served by high-performance systems, new systems need to serve multiple users and applications. Traditional high-performance systems typically served a single application at a given time; to maintain maximum flexibility, a new concept of “HPC as a Service” (HPCaaS) has been developed. HPCaaS includes the capability to use clustered servers and storage as resource pools, a web interface for users to submit their job requests, and a smart scheduling mechanism that can schedule multiple different applications simultaneously on a given cluster, taking into consideration the different application characteristics for maximum overall productivity.

HPC as a Service enables greater system flexibility since it eliminates the need for dedicated hardware resources per application and allows resources to be allocated dynamically per task while maximizing productivity. It is also the key component in bringing high-performance computing into cloud computing. Effective HPCaaS, though, needs to take the application’s demands into consideration and provide the minimum hardware resources required per application. Scheduling multiple applications to run at once requires balancing the resources for each application in proportion to its demands.
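
As a rough illustration of that proportional-balancing idea, the sketch below splits a pool of cluster nodes among queued applications according to their relative demands. It is not a description of any particular scheduler; the application names and demand figures are hypothetical.

```python
# Illustrative only: divide a cluster's nodes among applications in proportion
# to their relative demands, the way an HPCaaS scheduler might when running
# several applications side by side.
def allocate_nodes(total_nodes, demands):
    """demands maps application name -> relative demand; returns name -> node count."""
    total_demand = sum(demands.values())
    allocation = {app: int(total_nodes * d / total_demand) for app, d in demands.items()}
    # Hand any nodes left over from rounding down to the most demanding applications.
    leftover = total_nodes - sum(allocation.values())
    for app, _ in sorted(demands.items(), key=lambda kv: kv[1], reverse=True)[:leftover]:
        allocation[app] += 1
    return allocation

# Hypothetical example: three applications sharing a 64-node cluster.
print(allocate_nodes(64, {"weather_model": 5, "crash_sim": 3, "reservoir_sim": 2}))
# -> {'weather_model': 33, 'crash_sim': 19, 'reservoir_sim': 12}
```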

Research activities on HPCaaS are being performed at the HPC Advisory Council (http://hpcadvisorycouncil.mellanox.com/). The results show the need for high-performance interconnects, such as 40Gb/s InfiniBand, to maintain high productivity levels. It was also shown that scheduling mechanisms can be set to guarantee the same levels of productivity with HPCaaS as with the “native” dedicated-hardware approach. HPCaaS is not only critical to the way we will perform high-performance computing in the future; as more HPC elements are brought into the data center, it will also become an important factor in building the most efficient enterprise data centers.

Gilad Shainer
Director, Technical Marketing
gilad@mellanox.com

SSD over InfiniBand

Last week I was at Storage Networking World in Orlando, Florida. The sessions were much better organized, with a focus on all the popular topics like Cloud Computing, Storage Virtualization and Solid State Storage (SSD). In our booth, we demonstrated our Layer 2-agnostic storage supporting iSCSI, FCoE (Fibre Channel over Ethernet) and SRP (SCSI RDMA Protocol), all coexisting on a single network. We partnered with Rorke Data, who demonstrated a 40Gb/s InfiniBand-based storage array, and with Texas Memory Systems, whose ‘World’s Fastest Storage’ demonstrated sustained rates of 3Gb/s and over 400K I/Os per second using solid state drives in our booth.

I attended a few of the sessions in the SSD and Cloud Computing tracks. SSD was my favorite topic, primarily because InfiniBand and SSD together provide the highest storage performance and have the potential to carve out a niche in the data center OLTP applications market. The presentation on SSD by Clod Barrera, IBM’s Chief Technical Storage Strategist, was very good. He had a chart showing how HDD I/O rates per GByte have dropped very low and are currently holding constant at around 150 to 200 I/Os per drive. In contrast, SSDs are capable of around 50K I/Os on reads and 17K I/Os on writes. Significant synergy can be achieved by combining SSD with InfiniBand technology: InfiniBand delivers the lowest latency, at sub-1us, and the highest bandwidth, at 40Gb/s. The combination of these technologies will provide significant value in the data center and has the potential to change the database and OLTP storage infrastructure.
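
To put those figures in perspective, here is a quick back-of-the-envelope comparison based on the numbers quoted above (taking 175 I/Os per second as the midpoint of the 150 to 200 range for a disk drive).

```python
# Back-of-the-envelope arithmetic using the I/O rates quoted above.
hdd_iops = 175           # midpoint of the 150-200 I/Os per drive cited for HDDs
ssd_read_iops = 50_000   # read I/Os cited for a single SSD
ssd_write_iops = 17_000  # write I/Os cited for a single SSD

print(f"HDDs needed to match one SSD on reads:  {ssd_read_iops / hdd_iops:.0f}")   # ~286
print(f"HDDs needed to match one SSD on writes: {ssd_write_iops / hdd_iops:.0f}")  # ~97
# Needing hundreds of spindles to match a single drive's I/O rate is exactly the
# kind of gap that makes a low-latency, high-bandwidth fabric in front of SSDs
# so attractive.
```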

SSD over InfiniBand delivers:

- Ultra-fast, lowest-latency infrastructure for transaction processing applications

- A more compelling “green” footprint per GB

- Faster recovery times for business continuity applications

- Disruptive scaling

I see a lot of opportunity for InfiniBand technology in the storage infrastructure, as SSDs provide the much-needed discontinuity from rotating media.

TA Ramanujam (TAR)
tar@mellanox.com