Category Archives: Virtualization

Mellanox Collaborates with Dell to Maximize Application Performance in Virtualized Data Centers

Dell Fluid Cache for SAN is enabled by ConnectX®-3 10/40GbE Network Interface Cards (NICs) with Remote Direct Memory Access (RDMA). The Dell Fluid Cache for SAN solution reduces latency and improves I/O performance for applications such as Online Transaction Processing (OLTP) and Virtual Desktop Infrastructure (VDI).

Dell lab tests have revealed that Dell Fluid Cache for SAN can reduce the average response time by 99 percent and achieve four times more transactions per second with a six-fold increase in concurrent users.

Continue reading

Enabling Application Performance in Data Center Environments

Ethernet switches are conceptually simple: they move packets from port to port based on the attributes of each packet. There are plenty of switch vendors to choose from, and every one of them aspires to differentiate itself in this saturated market.

Mellanox Technologies switches are unique in this market: not just “yet another switch,” but a family of 1RU switches built around a self-designed switching ASIC. These switches outperform any other switch offered in the market, and as the first, and still the only, vendor with a complete end-to-end 40GbE solution, Mellanox provides a complete interconnect solution and the highest price-performance ratio.

Continue reading

How RDMA Increases Virtualization Performance Without Compromising Efficiency

Virtualization has already proven itself to be the best way to improve data center efficiency and simplify management tasks. However, getting those benefits requires using the various services that the Hypervisor provides, which introduces delays and longer execution times compared to running on a non-virtualized (native) infrastructure. This drawback has not escaped the high-tech R&D community, which has been seeking ways to enjoy the advantages of virtualization with minimal effect on performance.

One of the most popular ways to achieve native performance today is the SR-IOV (Single Root I/O Virtualization) mechanism, which bypasses the Hypervisor and creates a direct link between the VM and the I/O adapter. However, although the VM gets native performance, it loses the Hypervisor services: important features like high availability (HA) and VM migration can no longer be done easily. SR-IOV also requires the VM to carry the driver for the specific NIC it communicates with, which complicates management, since IT managers can no longer rely on the common driver that runs between the VM and the Hypervisor.
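To make the mechanism concrete, here is a minimal sketch, assuming a Linux host with an SR-IOV-capable adapter, of how virtual functions are typically instantiated through the standard sysfs interface; the interface name “eth0” and the VF count are hypothetical placeholders, not values taken from any deployment described here.

```python
#!/usr/bin/env python3
# Minimal sketch: instantiate SR-IOV virtual functions (VFs) on a Linux
# host via sysfs. Requires root and an SR-IOV-capable NIC; "eth0" and
# NUM_VFS are hypothetical placeholders.
from pathlib import Path

IFACE = "eth0"
NUM_VFS = 4

dev = Path(f"/sys/class/net/{IFACE}/device")

# sriov_totalvfs reports the maximum number of VFs the device supports.
total = int((dev / "sriov_totalvfs").read_text())
if NUM_VFS > total:
    raise SystemExit(f"{IFACE} supports at most {total} VFs")

# Writing to sriov_numvfs asks the driver to create the VFs; each VF
# then appears as its own PCI function that can be handed to a VM,
# taking the Hypervisor's virtual switch off the data path.
(dev / "sriov_numvfs").write_text(str(NUM_VFS))

# Each VF shows up as a virtfnN symlink pointing at its PCI address.
for link in sorted(dev.glob("virtfn*")):
    print(link.name, "->", link.resolve().name)
```

Each PCI address printed above is a device a VM can own outright, which is precisely why the Hypervisor services no longer see that VM’s traffic.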

As virtualization becomes a standard technology, the industry continues to look for ways to improve performance without losing these benefits, and organizations have started to invest more in deploying RDMA-enabled interconnects in virtualized data centers. In one of my previous blog posts, I discussed the proven deployment of RoCE (RDMA over Converged Ethernet) in Azure using SMB Direct (SMB 3.0 over RDMA) to enable faster access to storage.

Continue reading

Are Desktops Becoming the World’s Digital Dinosaur?


It is no secret that recent market trends have forced the traditional desktop through a dramatic transformation. It is also easy to predict that, sooner rather than later, the traditional way of sitting and working in front of a desktop will disappear. Why is this happening? Desktops led the digital revolution and ruled the digital world for more than 30 years, yet they now face a sudden death, much like the way the dinosaurs disappeared. What is the “asteroid” that will destroy such a large and well-established infrastructure? Can it be stopped?

Continue reading

The Storage Fabric of the Future Virtualized Data Center

Guest post by Nelson Nahum, Zadara Storage

It is evident that the future data center will be based on cutting-edge software and virtualization technologies that make the most effective use of hardware, compute power, and storage in order to run essential analytics and to boost the performance of media-related and advanced web applications. And it turns out that the wires connecting all this technology together are no less crucial to next-generation data centers and clouds than the software and virtualization layers that run within them.


There are multiple storage fabrics and interconnects available today, including Fibre Channel, Ethernet, and SAS. Each has its pros and cons, and fabrics have traditionally been chosen according to requirements for performance, compatibility, and cost efficiency.

As an enterprise storage-as-a-service provider delivering a software-based cloud storage solution for public, private, and hybrid cloud models on commodity hardware, Zadara Storage provides storage as a service in multiple public cloud and colocation facilities around the globe. Consistency, high availability, and predictability are key to supplying the scalable, elastic service our customers expect, regardless of their location, facility, or the public cloud they employ. The hardware we use needs to be dependable, pervasive, and cost-efficient in order to sustain the performance and cost level of our service, anywhere and at any scale.

When choosing our fabric, Ethernet was the clear choice. Ethernet is likely to become the new standard, and it boasts several advantages vital to our product:

  • Ethernet’s speed roadmap is aggressive: from 10GbE to 40GbE, with 100GbE on the way
  • Ethernet is ubiquitous: we can employ it without complication at any data center or colocation facility around the globe
  • We have found Ethernet’s latency to be more than manageable, especially since we use advanced techniques such as I/O virtualization and data passthrough
  • Ethernet is the most cost-effective: an as-a-service company needs a competitive pricing edge

The future of enterprise storage
The future of Enterprise Storage lies in software and a choice of hardware (premium or commodity). Software-defined storage can scale performance more easily and cost-effectively than monolithic hardware, and by combining the best of hardware and software, the customer wins. Ethernet is a critical element of our infrastructure, and Mellanox switches offer the significantly higher performance and consistent dependability that enable our storage fabric and meet our customers’ needs.

Zadara Storage at the Mellanox Booth at VMworld 2013
Wednesday, August 28, at 2:15pm
At the Mellanox booth at VMworld 2013, Zadara Storage CEO Nelson Nahum will present the Zadara™ Storage Cloud, based on the patent-pending CloudFabric™ architecture, which provides a breakthrough cost structure for data centers. Zadara’s software-defined solution employs standard, off-the-shelf x86 servers and uses Ethernet as its only interconnect to deliver performant, reliable, SSD- and spindle-based SAN and NAS as a service.

About Zadara Storage
An Amazon Web Services and Dimension Data technology partner, and winner of the VentureBeat, Tie50, Under the Radar, and Plug and Play cloud competitions, Zadara Storage offers enterprise-class storage for the cloud in the form of storage as a service (STaaS). With Zadara Storage, cloud storage leapfrogs ahead to provide cloud servers with high-performance, fully configurable, highly available, fully private, tiered SAN and NAS as a service. By combining the best of enterprise storage with the best of cloud and cloud block storage, Zadara Storage accelerates the cloud by enabling enterprises to migrate existing mission-critical applications to the cloud.

Product Flash – Bridgeworks Potomac 40Gb iSCSI-to-SAS Bridge

Written by: Erin Filliater, Enterprise Market Development Manager

The amount of digital information worldwide is growing on a daily basis, and all of that data has to be stored somewhere, usually in external storage infrastructures, systems, and devices. Of course, for that information to be useful, you need fast access to it when your application calls for it. Enter Bridgeworks’ newest bridging product, the Potomac ESAS402800 40Gb iSCSI-to-SAS protocol bridge. The first to take advantage of 40Gb/s data center infrastructures, the ESAS402800 integrates Mellanox 40Gb iSCSI technology to provide the fastest iSCSI SAN connectivity to external SAS devices such as disk arrays, LTO-6 tape drives, and tape libraries, allowing data center administrators to integrate the newest storage technologies into their environments without disrupting their legacy systems.

In addition to flat-out speed, plug-and-play connectivity and web-based GUI management make the ESAS402800 easy to install and operate. Adaptive read- and write-forward caching techniques allow the ESAS402800 bridge to share storage effectively in today’s highly virtualized environments.

All of this adds up to easier infrastructure upgrades, more effective storage system migration, and realization of the full performance potential of new SAS-connected storage systems. Pretty impressive for a single device.

Find out more about the recent Potomac ESAS402800 40Gb iSCSI-to-SAS bridge launch at Bridgeworks’ website:

http://www.4bridgeworks.com/news_and_press_releases/press_releases.phtml?id=252&item=26

Interconnect analysis: InfiniBand and 10GigE in High-Performance Computing

InfiniBand and Ethernet are the leading interconnect solutions for connecting servers and storage systems in high-performance computing and in enterprise (virtualized or not) data centers. Recently, the HPC Advisory Council has put together the most comprehensive database for high-performance computing applications to help users understand the performance, productivity, efficiency and scalability differences between InfiniBand and 10 Gigabit Ethernet.

In summary, many HPC applications need the lowest possible latency or the highest bandwidth for best performance (for example, oil and gas applications as well as weather-related applications). Some HPC applications are not latency sensitive: gene sequencing and some bioinformatics applications, for instance, scale well on TCP-based networks, including GigE and 10GigE. For converged HPC networks, putting HPC message-passing traffic and storage traffic on a single TCP network may not provide enough data throughput for either. Finally, a number of examples show that 10GigE has limited scalability for HPC applications, and that InfiniBand proves to be a better performance, price/performance, and power solution than 10GigE.
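To make the latency comparison concrete, below is a minimal sketch of the kind of small-message ping-pong microbenchmark commonly used to measure one-way interconnect latency between two nodes. It uses mpi4py; the message size and iteration count are arbitrary assumptions, not parameters taken from the report.

```python
# Minimal MPI ping-pong latency sketch; run with:
#   mpirun -np 2 python pingpong.py
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

ITERS = 10_000
buf = bytearray(8)  # tiny 8-byte message: measures latency, not bandwidth

comm.Barrier()  # start both ranks together
start = time.perf_counter()
for _ in range(ITERS):
    if rank == 0:
        comm.Send(buf, dest=1)    # ping
        comm.Recv(buf, source=1)  # pong
    else:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # Each iteration is a full round trip, so halve it for one-way latency.
    print(f"one-way latency: {elapsed / ITERS / 2 * 1e6:.2f} us")
```

Latency-sensitive MPI applications repeat exchanges like this millions of times, which is why per-message interconnect latency, rather than raw bandwidth, dominates their scalability.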

The complete report can be found under the HPC Advisory Council case studies or by clicking here.

Thanks for coming to see us at VMworld

VMworld was everything we expected and more. Traffic was tremendous, and there was a lot of excitement and buzz in our booth (especially after we won Best of VMworld in the Cloud Computing category). Just in case you were unable to sit through one of the presentations from Mellanox or from one of our partners (Xsigo, HP, Intalio, RNA Networks, and the OpenFabrics Alliance), we went ahead and videotaped the sessions and have posted them below.

  • Mellanox – F.U.E.L. Efficient Virtualized Data Centers
  • Mellanox – On-Demand Network Services
  • Intalio – Private Cloud Platform
  • HP – BladeSystem and ExSO SL-Series
  • Xsigo – How to Unleash vSphere’s Full Potential with Xsigo Virtual I/O
  • RNA Networks – Virtual Memory
  • OpenFabrics Alliance – All Things Virtual with OpenFabrics and IB

Missed Mellanox at Interop?

In case you missed us at Interop 2009, below are a few of the presentations that took place in our booth.

  • Mellanox 10 Gigabit Ethernet and 40Gb/s InfiniBand adapters, switches, and gateways are key to making your data center F.U.E.L. Efficient
  • Mellanox Product Manager Satish Kikkeri provides additional details on Low-Latency Ethernet
  • Mellanox Product Manager TA Ramanujam provides insight on how data centers can achieve true unified I/O today
  • Fusion-io’s CTO, David Flynn, presents “Moving Storage to Microsecond Time-Scales”

We look forward to seeing you at our next event or tradeshow.

Brian Sparks
brian@mellanox.com

I/O Virtualization

I/O virtualization is a complementary solution to server and storage virtualization that aims to reduce the management complexity of physical connections in and out of virtual hosts. Virtualized data center clusters have multiple networking connections to LAN and SAN, and virtualizing the network avoids the extra complexity associated with them. While I/O virtualization reduces management complexity, maintaining high productivity and scalability requires attention to the other characteristics of the network being virtualized.

Offloading network virtualization from the VMM (virtual machine manager, e.g., the Hypervisor) to a smart networking adapter not only reduces the CPU overhead associated with virtualization management, but also increases the performance capability of the virtual machines (or guest OSs) and can provide them with native performance.

The PCI-SIG has standards in place to help simplify I/O virtualization. The most interesting solution is Single Root I/O Virtualization (SR-IOV). SR-IOV allows a smart adapter to create multiple virtual adapters (virtual functions) for a given physical server, and these virtual adapters can be assigned directly to a virtual machine (VM) instead of relying on the VMM to manage everything.

SR-IOV provides a standard mechanism for devices to advertise their ability to be simultaneously shared among multiple virtual machines, allowing PCI functions to be partitioned into many virtual interfaces so that the resources of a PCI device can be shared in a virtual environment.
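As an illustration of how a virtual function is handed to a guest in practice, here is a minimal sketch using the libvirt Python bindings; the domain name “vm1” and the VF’s PCI address are hypothetical placeholders, and the XML follows libvirt’s standard hostdev format for PCI passthrough.

```python
# Sketch: assign an SR-IOV virtual function (VF) directly to a guest VM
# via libvirt. Assumes a VF already exists at the (hypothetical) PCI
# address below and that the guest "vm1" is running.
import libvirt

VF_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x10' function='0x1'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.lookupByName("vm1")         # hypothetical guest name

# Once attached, the VF appears inside the guest as its own PCI NIC;
# the VMM is no longer on the data path between the guest and the wire.
dom.attachDevice(VF_XML)
conn.close()
```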

Mellanox interconnect solutions provide full SR-IOV support while adding the scalability and high throughput required to effectively support multiple virtual machines on a single physical server. With Mellanox 10GigE or 40Gb/s InfiniBand solutions, each virtual machine gets the bandwidth allocation it needs to ensure the highest productivity and performance, just as if it were a physical server.

Gilad Shainer
Director of Technical Marketing
gilad@mellanox.com