Category Archives: Cloud Computing

Accelerating Red Hat’s new OpenStack cloud platform with Mellanox Interconnect

Red Hat Enterprise Linux OpenStack Platform is a leading new open-source Infrastructure-as-a-Service (IaaS) solution for building and deploying cloud-enabled workloads. This cloud platform gives customers the agility to scale quickly and meet demand without compromising on availability, security, or performance.

Red Hat has built an industry-leading certification program for its OpenStack platform. By achieving this technology certification, partners can assure customers that their solutions have been validated with Red Hat OpenStack technology. Anyone who earns this new certification can show that they are able to accomplish the following tasks:

  • Install and configure Red Hat Enterprise Linux OpenStack Platform.
  • Manage users, projects, flavors, and rules.
  • Configure and manage images.
  • Add compute nodes.
  • Manage storage using Swift and Cinder.

 

Mellanox is listed in the Red Hat marketplace as a certified hardware partner for the Networking (Neutron) and Block Storage (Cinder) services. This means that Mellanox ConnectX-3 hardware has been tested and certified, and is now supported with Red Hat OpenStack technology.

Mellanox Technologies offers seamless integration between its products and Red Hat OpenStack services and provides unique functionality that includes application and storage acceleration, network provisioning, automation, hardware-based security, and isolation. Furthermore, using Mellanox interconnect products allows cloud providers to save significant capital and operational expenses through network and I/O consolidation and by increasing the number of virtual machines (VMs) per server.

With the Mellanox ConnectX-3 card and its OpenStack plugins, customers benefit from superior performance and native integration with Neutron:

 

The Mellanox OpenStack solution extends the Cinder project by adding iSCSI running over RDMA (iSER). By leveraging RDMA, Mellanox OpenStack delivers 5x higher data throughput (for example, increasing from 1GB/s to 5GB/s) while reducing CPU utilization by up to 80%.
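
To make this concrete, here is a minimal cinder.conf fragment showing how an iSER-backed volume back end might be enabled with the LVM driver. The driver class path and option names are assumptions based on OpenStack releases of that era and vary between versions, so treat this as an illustrative sketch rather than the definitive Mellanox configuration.

```ini
[DEFAULT]
# Assumption: Havana-era LVM driver with the iSER (iSCSI over RDMA) transport.
# Class paths and option names differ across OpenStack releases.
volume_driver = cinder.volume.drivers.lvm.LVMISERDriver
# Assumption: address of the RDMA-capable (ConnectX-3) interface exporting volumes
iser_ip_address = 192.168.10.10
```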


 

Mellanox ConnectX-3 adapters are equipped with an onboard embedded switch (eSwitch) capable of performing layer-2 switching between the different VMs running on the server. Using the eSwitch yields higher performance levels in addition to security and QoS. Through the Mellanox Neutron plugin, the eSwitch configuration is transparent to the Red Hat Enterprise Linux OpenStack Platform administrator. By combining SR-IOV (Single Root I/O Virtualization) with RDMA over the eSwitch, we were able to show a dramatic difference (20x) compared to a para-virtualized vNIC running a TCP stream.
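
To show what the Neutron side can look like, below is a minimal Python sketch using python-neutronclient to request an SR-IOV ("direct") port that a VM can then be booted with. It assumes a deployment where the binding:vnic_type port attribute is honored by the installed SR-IOV/Mellanox mechanism; the attribute names, credentials, and network name here are assumptions that vary across OpenStack releases, so this is an illustration rather than the exact Red Hat or Mellanox procedure.

```python
# Illustrative only: create a 'direct' (SR-IOV virtual function) port in Neutron,
# then hand the port ID to Nova when booting the instance.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',            # assumption: demo credentials
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

net_id = neutron.list_networks(name='private')['networks'][0]['id']

port = neutron.create_port({'port': {
    'network_id': net_id,
    'name': 'sriov-port-1',
    'binding:vnic_type': 'direct',   # ask for a hardware VF instead of a para-virtual vNIC
}})['port']

print('Boot the instance with: nova boot --nic port-id=%s ...' % port['id'])
```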


Learn more:

Mellanox and Red Hat OpenStack joint solution - click here

View the Mellanox certification - click here

Author: Eli Karpilovski manages Cloud Market Development at Mellanox Technologies. He also serves as Chairman of the Cloud Advisory Council. Previously, he served as product manager for the HCA software division at Mellanox Technologies. Mr. Karpilovski holds a Bachelor of Science in Engineering from the Holon Institute of Technology and a Master of Business Administration from The Open University of Israel.

ConnectX-3 Leverages Network Services in SDN Era

Guest blog by: Alon Harel

 

If your job is related to networking, be it as a network admin, an R&D engineer, an architect, or any other role involving networks, it is very likely you have heard people around you (or GASP! maybe even heard yourself) express doubts about the proliferation of Software Defined Networking (SDN) and OpenFlow. How many times have you encountered skepticism about this revolutionary new concept of decoupling the control and data planes and “re-inventing the wheel”? Many people used to think, “This is hype; it will go away like other new technologies did, and it will never replace the traditional network protocols…” Well, if you perceive SDN/OpenFlow only as a replacement for today’s distributed network protocols, these doubts may turn out to be valid. The notion that “OpenFlow is here to replace the old strict protocols” is pretty much the message one gets from reading the early white papers on OpenFlow. Those papers described the primary motivation for moving to OpenFlow as the determination to introduce innovation in the control plane (that is, the ability to test and apply new forwarding schemes in the network).

 

This long preface is the background for the use case we present below. This use case is not about a new forwarding scheme, nor is it about re-implementing protocols; rather, it is a complementary solution for existing traditional networks. It is about adding network services in an agile way, allowing cost-efficient scalability. It is innovative and fresh and, most importantly, it could not have been done prior to the SDN era. Its simplicity and the fact that it relies on some very basic notions of OpenFlow can only spark the imagination about what can be done further using the SDN toolbox.

 

RADWARE’s security appliance, powered by Mellanox’s OpenFlow-enabled ConnectX®-3 adapter, brings a new value proposition to the network appliance market, demonstrating the power of SDN by enabling the addition of network services in a most efficient and scalable way.

 

The security and attack-mitigation service is applied to pre-defined protected objects (servers) identified by their IP addresses. Prior to SDN, the security appliance had to be a ‘bump in the wire’ because all traffic destined for the protected objects had to traverse it. This, of course, dictates the physical network topology, is limited by the appliance’s port bandwidth, and imposes high complexity when scale comes into play.

 

RADWARE’s DefenseFlow software is capable of identifying abnormal network behavior by monitoring the number of bytes and packets of specific flows destined for the protected objects. The monitoring is performed by installing specific flows in the forwarding hardware solely for the sake of counting the amount of data traversing it. Flows are configured and counter information is retrieved via standard OpenFlow primitives. The naïve approach would be to use the OpenFlow switches to accommodate the flows (counters); however, the limited resource capacity of commodity switches (mainly TCAM, which is the prime resource for OpenFlow) rules out this option. (Note that a switch may be the data path for hundreds or thousands of VMs, each with several monitored flows.) Thus, the viability of the solution must come from somewhere else. Enter Mellanox’s OpenFlow-enabled ConnectX-3 SR-IOV adapter.
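
As a rough illustration of what “installing flows only for the sake of counting” might look like through a controller’s REST interface, here is a hedged Python sketch against Floodlight’s Static Flow Pusher and flow-statistics endpoints. The endpoint paths, field names, and counter keys follow Floodlight releases of that era and may differ in other versions; the controller address, datapath ID, and protected IP are assumptions, and none of this is RADWARE’s actual DefenseFlow code.

```python
# Sketch: install a counting-only flow for a protected server and read back
# its byte/packet counters via Floodlight's REST API (paths and field names
# assumed from 0.90-era Floodlight; adjust for your controller version).
import requests

CONTROLLER = 'http://127.0.0.1:8080'       # assumption: local Floodlight controller
DPID = '00:00:00:00:00:00:00:01'           # assumption: datapath ID of the eSwitch
PROTECTED_IP = '10.0.0.5'                  # one protected object (server)

# The flow matches traffic toward the protected server but leaves forwarding
# to the normal pipeline; we only care about the counters it accumulates.
flow = {
    'switch': DPID,
    'name': 'count-%s' % PROTECTED_IP,
    'ether-type': '0x0800',                # IPv4
    'dst-ip': PROTECTED_IP,
    'active': 'true',
    'actions': 'output=normal',
}
requests.post(CONTROLLER + '/wm/staticflowentrypusher/json', json=flow).raise_for_status()

# Read the per-flow byte and packet counters back from the switch.
stats = requests.get(CONTROLLER + '/wm/core/switch/%s/flow/json' % DPID).json()
for entry in stats.get(DPID, []):
    print(entry.get('byteCount'), entry.get('packetCount'))
```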

 

ConnectX-3 incorporates an embedded switch (eSwitch) that enables VM communication to enjoy bare-metal performance. The HCA driver includes OpenFlow agent software, based on the Indigo-2 open source project, which enables the eSwitch to be controlled using the standard OpenFlow protocol.

 

Installing the flows (counters) on the edge switch (the eSwitch) makes a lot of sense. First, each eSwitch is responsible for only a relatively small number of protected objects (only those servers running on a specific host), so the scale obstacle becomes a non-issue. Moreover, more clever or sophisticated monitoring (for example, event generation when a threshold is crossed) can easily be added, offloading the monitoring application (DefenseFlow in this case).

 

You might think, “What’s new about that? We already have Open vSwitch (OVS) on the server, which is OpenFlow capable.” Well, when performance is the name of the game, OVS is out and SR-IOV technology is in. In SR-IOV mode, VM communication interfaces with the hardware directly, bypassing any virtual switch processing software; therefore, in this mode OVS’s OpenFlow capabilities cannot be used (as OVS is not part of the data path).

 

Let’s take a look at this practically by describing the setup and operation of the joint solution. The setup is based on standard servers equipped with Mellanox’s ConnectX-3 adapters and an OpenFlow-enabled switch, together with RADWARE’s DefensePro appliance and DefenseFlow software, which interacts with the Floodlight OpenFlow controller.


Figure 1 – Setup

 

Here’s a description of the joint solution operation, as depicted in Figure 2:

  • DefenseFlow installs the relevant flows on each ConnectX-3 adapter.
  • The security appliance does not participate in the normal data path.
  • ConnectX-3 counts traffic matching the installed flows.
  • Flow counters are retrieved from ConnectX-3.
  • Once an attack is identified, only relevant traffic is diverted to the security appliance, where it is cleared of malicious flows and re-inserted toward its destination (see the sketch after this list).
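
Below is a toy continuation of the earlier Floodlight sketch showing that final detect-and-divert step: poll the counters, and when the byte rate toward a protected server crosses a threshold, push a higher-priority flow that steers the traffic to the appliance port. The threshold, port number, endpoint paths, and field names are all assumptions for illustration; the real DefenseFlow logic is far more sophisticated.

```python
# Sketch only: naive threshold detection and traffic diversion via OpenFlow.
import time
import requests

CONTROLLER = 'http://127.0.0.1:8080'        # assumption: local Floodlight controller
DPID = '00:00:00:00:00:00:00:01'            # assumption: datapath ID of the eSwitch
PROTECTED_IP = '10.0.0.5'
APPLIANCE_PORT = 4                          # assumption: switch port facing DefensePro
THRESHOLD_BPS = 100 * 1024 * 1024           # assumption: 100 MB/s toward one server

def bytes_toward(ip):
    """Sum byte counters of installed flows whose destination matches ip."""
    stats = requests.get(CONTROLLER + '/wm/core/switch/%s/flow/json' % DPID).json()
    return sum(int(e.get('byteCount', 0))
               for e in stats.get(DPID, [])
               if e.get('match', {}).get('networkDestination') == ip)

last = bytes_toward(PROTECTED_IP)
while True:
    time.sleep(1)
    now = bytes_toward(PROTECTED_IP)
    if now - last > THRESHOLD_BPS:          # abnormal rate: divert to the appliance
        divert = {
            'switch': DPID,
            'name': 'divert-%s' % PROTECTED_IP,
            'priority': '40000',            # beat the normal forwarding entries
            'ether-type': '0x0800',
            'dst-ip': PROTECTED_IP,
            'active': 'true',
            'actions': 'output=%d' % APPLIANCE_PORT,
        }
        requests.post(CONTROLLER + '/wm/staticflowentrypusher/json', json=divert)
        break
    last = now
```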

 

 


Figure 2 – Joint Solution

 

I would argue that any skeptic who sees this example use case, and the added value it brings to existing network environments using these very basic OpenFlow knobs, would have to reconsider their SDN doubts…

Mellanox Joins the Network Intelligence Alliance

We are happy to join the Network Intelligence Alliance, an industry organization created for collaboration among the Network Economy’s technology providers. Through our participation in the Alliance, Mellanox will help develop and market innovative solutions that further improve networking for enterprises, cloud providers, and telecom operators.

 

Using Mellanox’s low-latency, CPU-efficient 10/40GbE NICs and switches, customers can deploy an embedded virtual switch (eSwitch) to run virtual machine traffic with bare-metal performance and provide hardened security and QoS, all with simpler management through Software Defined Networking (SDN) and OpenFlow APIs. The hardware-based security and isolation features in our 10/40GbE solutions can enable wider adoption of multi-tenant clouds while maintaining user service-level agreements (SLAs). In addition, by utilizing SR-IOV to bypass the hypervisor, customers can run more VMs per server when virtualizing network functions on their cloud and data center server and storage infrastructure.

 

In a world that now depends and runs on networks, accurate visibility and precise tracking of the data crossing them have become crucial to the availability, performance, and security of applications and services. The growing complexity of IP transactions, the explosion of mobile applications, and the mainstream adoption of cloud computing surpass the capabilities of conventional tools to improve how networks operate, expand services, and cope with cybersecurity threats. Just as Business Intelligence solutions emerged to unlock information hidden in the enterprise, Network Intelligence is an emerging technology category that reveals the critical details locked inside network traffic and transactions.

Mellanox is excited to be a part of this great group, and we look forward to collaborating with the other members.

http://www.mellanox.com/

Mellanox encourages you to join our community and follow us on: LinkedIn, the Mellanox Blog, Twitter, YouTube, and the Mellanox Community

How Windows Azure achieved 90.2 percent efficiency

Written By: Eli Karpilovski, Manager, Cloud Market Development

 

Windows Azure, one of the largest public cloud providers in the world today, recently ran the LINPACK system performance benchmark to demonstrate the performance capabilities of its ‘Big Compute’ hardware. Windows Azure submitted the results, and the system was certified as one of the world’s largest supercomputers on the TOP500 list.

 

The results were impressive: 151.3 TFlops on 8,065 cores with 90.2 percent efficiency, 33% higher efficiency than other major 10GbE cloud providers that ran the same benchmark!
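
For context, LINPACK efficiency is simply the measured result (Rmax) divided by the theoretical peak (Rpeak). Assuming roughly 2.6 GHz cores capable of 8 double-precision floating-point operations per cycle (typical of the Xeon parts of that generation; an assumption, not a figure from the Azure post), Rpeak ≈ 8,065 × 2.6 GHz × 8 ≈ 167.8 TFlops, and 151.3 / 167.8 ≈ 0.902, which is where the 90.2 percent figure comes from.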

 

What is their secret? 40Gb/s InfiniBand network with RDMA – the Mellanox way.

 

Learn more about it >>  (http://blogs.msdn.com/b/windowsazure/archive/2012/11/13/windows-azure-benchmarks-show-top-performance-for-big-compute.aspx)

 

Join the Mellanox Cloud Community: http://community.mellanox.com/groups/cloud

Why Atlantic.Net Chose Mellanox

Atlantic.Net is a global cloud hosting provider. With Mellanox interconnect solutions, Atlantic.Net can now offer customers more robust cloud hosting services through a reliable, adaptable infrastructure, all at a lower cost in comparison to traditional interconnect solutions.

Why Atlantic.Net Chose Mellanox

  • Price and Cost Advantage:

Expensive hardware, overhead costs while scaling, and administrative costs can all be avoided with Mellanox’s interconnect technologies, reducing costs by 32% per application.

  • Lower Latency and Faster Storage Access:

By utilizing the iSCSI Extensions for RDMA (iSER) protocol, implemented on KVM servers over a single converged InfiniBand adapter, Atlantic.Net gets lower latency with less complexity, resulting in lower costs to the user.

  • Consolidate I/O Transparently:

LAN and SAN connectivity for VMs on KVM is tightly integrated with Atlantic.Net’s management environment, allowing Atlantic.Net to transparently consolidate LAN, SAN, live-migration, and other traffic.

The Bottom Line

By deploying Mellanox’s InfiniBand solution, Atlantic.Net can support high-volume and high-performance requirements on demand and offer a service that scales as customers’ needs change and grow. Having built a high-performance, reliable, and redundant storage infrastructure using off-the-shelf commodity hardware, Atlantic.Net was able to avoid purchasing expensive Fibre Channel storage arrays, saving significant capital expenses per storage system.

 

http://youtu.be/frTWWwjacyc

The Promise of an End-to-End SDN Solution: Can It Be Done?

Written By: Eli Karpilovski, Manager, Cloud Market Development

 

With OpenStack, the new open-source cloud orchestration platform, the promise of flexible network virtualization and network overlays is looking closer than ever. The vision of this platform is to enable the on-demand creation of many distinct networks on top of one underlying physical infrastructure in the cloud environment. The platform will support automated provisioning and management of large groups of virtual machines or compute resources, including extensive monitoring in the cloud.

 

There is still a lot of work to be done, as there are many concerns around the efficiency and simplicity of the management solution for compute and storage resources. A mature solution will need to incorporate different approaches to intra-server provisioning, QoS, and vNIC management; for example, relying on local network adapters that can manage requests using the OpenFlow protocol, or using a more standard approach managed by the switch. Relying on only one method might create performance and efficiency penalties.

 

Learn how Mellanox’s OpenStack solution offloads the orchestration platform from the management of individual networking elements, with the end goal of simplifying operations of large-scale, complex infrastructures: www.mellanox.com/openstack

 

Have questions? Join our Cloud Community today!

Why I left HP after 19 years to join ProfitBricks

On 02.12.13, in Cloud Computing, by Pete Johnson, new Platform Evangelist

Woz once said, “I thought I’d be an HPer for life.” While I don’t usually claim to have a whole lot in common with the man who designed the first computer I ever saw (an Apple II, summer ’78), in this instance it’s true. As it turns out, we were both wrong.

Pete Johnson, new Platform Evangelist for ProfitBricks

I stayed at HP as long as I did for lots of reasons. Business model diversity is one:  over the last two decades, I was lucky enough to be a front line coder, a tech lead, a project manager, and an enterprise architect while working on web sites for enterprise support, consumer ecommerce sales, enterprise online sales, all forms of marketing, and even post-sales printing press supplies reordering.   Most recently I was employee #37 for HP’s new public cloud offering where I performed a lot of roles including project management of web development teams, customer facing demonstrations at trade shows, and sales pitches for Fortune 500 CIOs.  But I also remained at HP because of the culture and values that came straight from Bill Hewlett and Dave Packard, which my early mentors instilled in me. You can still find those values there today if you look hard enough, and if anybody gets that, Meg Whitman does.

Why leave HP for ProfitBricks then?

So if I still have such a rosy view of HP, despite recent bumpiness, why did I leave to become the Platform Evangelist for ProfitBricks?

Three reasons:

  1. InfiniBand
  2. InfiniBand
  3. InfiniBand

If you are anything like the sample of computer industry veterans I told about my move last week, you just said, “What the heck is InfiniBand?” Let me explain what it is and why it is poised to fundamentally change cloud computing.

Ethernet is the dominant network technology used in data centers today. Originally created during the Carter administration, it uses a hierarchical structure of LAN segments, which ultimately means that packets have exactly one path to traverse when moving from point A to point B anywhere in the network. InfiniBand, a popular 21st-century technology in the supercomputing and high-performance computing (HPC) communities, uses a grid or mesh system that gives packets multiple paths from point A to point B. This key difference, among other nuances, gives InfiniBand a top speed of 80 Gbits/sec, 80x faster than the 1 Gbit/sec standard Ethernet connections of Amazon’s AWS.

What’s the big deal about InfiniBand?

“So what?” you may be thinking. “A faster cloud network is nice, but it doesn’t seem like THAT big a deal.”

Actually, it is a VERY big deal when you stop and think about how a cloud computing provider can take advantage of a network like this.

As founder and CMO Andreas Gauger put it to me during the interview process, virtualization is a game of Tetris in which you are trying to fit various sizes of virtual machines on top of physical hardware to maximize utilization. This is particularly critical for a public cloud provider. With InfiniBand, ProfitBricks can rearrange the pieces, and at 80 Gbits/sec, our hypervisor can move a VM from one physical machine to another without the VM ever knowing. This helps us maximize the physical hardware and keep prices competitive, but it also means two other things for our customers:

  • You can provision any combination of CPU cores and RAM you want, up to and including the size of the full physical hardware we use
  • You can change the number of CPU cores or amount of RAM on-the-fly, live, without rebooting the VM

In a world where other public cloud providers force you into cookie-cutter VM sizes in an attempt to simplify the game of Tetris for themselves, the first feature is obviously differentiating. But when most people hear the second one, their reaction is that it can’t possibly be true, that it must be a lie. You can’t change virtual hardware on a VM without rebooting it, can you?

No way you can change CPU or RAM without rebooting a VM!

Do you suppose I’d check that out before leaving the only employer I’ve ever known in my adult life?

I spun up a VM, installed Apache, launched a load test from my desktop against the web server I had just created, changed both the CPU cores and RAM on the server instance, confirmed the change at the VM command line, and allowed the load test to end. You know what the load test log showed?

Number of errors: 0.

The Apache web server never went down, despite the virtual hardware change, and handled HTTP requests every 40 milliseconds. I never even lost my remote login session. Whoa.
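
For anyone who wants to repeat that confirmation step from inside the guest, here is a minimal sketch (my own illustration, not ProfitBricks tooling) that reports the CPU cores and RAM currently visible to a Linux VM; run it before and after resizing.

```python
# Report the CPU cores and total RAM visible to this (Linux) VM.
import os

def visible_resources():
    cores = os.cpu_count()                     # Python 3.4+; cores the guest currently sees
    with open('/proc/meminfo') as f:
        mem_kb = int(f.readline().split()[1])  # first line is MemTotal in kB
    return cores, mem_kb // 1024

cores, mem_mb = visible_resources()
print('cores=%d ram=%dMB' % (cores, mem_mb))
```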

But wait, there’s more (and more to come)

Throw in the fact that the ProfitBricks block storage platform takes advantage of InfiniBand to provide not only RAID 10 redundancy, but RAID 10 mirrored across two availability zones, and I was completely sold. I realized that ProfitBricks founder, CTO, and CEO Achim Weiss took the data center efficiency knowledge that gave 1&1 a tremendous price advantage and combined it with supercomputing technology to create a cloud computing game-changer that his engineering team is just beginning to tap into. I can’t wait to see what they do with object storage, databases, and everything else you’d expect from a full IaaS offering. I had to be a part of that.

Simply put: ProfitBricks uses InfiniBand to enable Cloud Computing 2.0.

And that’s why, after 19 years, I left HP.

RDMA – Cloud Providers’ “Secret Sauce”

Written By: Eli Karpilovski, Manager, Cloud Market Development

 

With expansive growth expected in the cloud computing market – some researchers expect it to grow from $70.1 billion in 2012 to $158.8 billion in 2014 – cloud service providers must find ways to deliver increasingly sustainable performance. At the same time, they must accommodate an increasing number of internet users, whose expectations for improved and consistent response times keep growing.

 

However, service providers cannot increase performance if the corresponding cost also rises. What these providers need is a way to deliver low latency, fast response, and increasing performance while minimizing the cost of the network.

 

One good example of how to accomplish that is RDMA. Traditionally, centralized storage was either slow or created bottlenecks, which de-emphasized the need for fast storage networks. With the advent of fast solid-state devices, a very fast, converged network is needed to leverage the capabilities they offer. In particular, we are starting to see cloud architectures using RDMA-based storage appliances to accelerate storage access times, reduce latency, and achieve the best CPU utilization at the endpoint.

 

To learn more about how RDMA helps cloud infrastructure meet performance, availability, and agility requirements, now and in the future, check the following link.

 

Mellanox – InfiniBand makes headway in the cloud – YouTube

Partners Healthcare Cuts Latency of Cloud-based Storage Solution Using Mellanox InfiniBand Technology

An interesting article just came out from Dave Raffo at SearchStorage.com. I have a quick summary below, but you should certainly read the full article here: “Health care system rolls its own data storage ‘cloud’ for researchers.”

Partners HealthCare, a non-profit organization founded in 1994 by Brigham and Women’s Hospital and Massachusetts General Hospital, is an integrated health care system that offers patients a continuum of coordinated high-quality care.

Over the past few years, ever-increasing advances in the resolution and accuracy of medical devices and instrumentation technologies have led to an explosion of data in biomedical research. Partners recognized early on that a Cloud-based research compute and storage infrastructure could be a compelling alternative for their researchers. Not only would it enable them to distribute costs and provide storage services on demand, but it would save on IT management time that was spent fixing all the independent research computers distributed across the Partners network.

Initially, Partners HealthCare chose Ethernet as the network transport technology. As demand grew, the solution began hitting significant performance bottlenecks, particularly during reads and writes of hundreds of thousands of small files. The issue was found to lie with the interconnect: Ethernet created problems due to its high natural latency. In order to provide a scalable, low-latency solution, Partners HealthCare turned to InfiniBand. With InfiniBand on the storage back end, Partners experienced roughly two orders of magnitude faster read times. “One user had over 1,000 files, but only took up 100 gigs or so,” said Brent Richter, corporate manager for enterprise research infrastructure and services, Partners HealthCare System. “Doing that with Ethernet would take about 40 minutes just to list that directory. With InfiniBand, we reduced that to about a minute.”

Also, Partners chose InfiniBand over 10-Gigabit Ethernet because InfiniBand is a lower latency protocol. “InfiniBand was price competitive and has lower latency than 10-Gig Ethernet,” he said.

Richter said the final price tag came to about $1 per gigabyte.

By integrating Mellanox InfiniBand into the storage solution, Partners HealthCare was able to reduce latency to nearly zero and increase performance, providing its users with faster response times and higher capacity.

Till next time,

Brian Sparks

Sr. Director, Marketing Communication

Thanks for coming to see us at VMworld

VMworld was everything we expected and more. The traffic was tremendous, and we had a lot of excitement and buzz in our booth (especially after we won Best of VMworld in the Cloud Computing category). Just in case you were unable to sit through one of Mellanox’s presentations, or one from our partners (Xsigo, HP, Intalio, RNA Networks, and the OpenFabrics Alliance), we went ahead and videotaped the sessions and have posted them below.

 

 Mellanox – F.U.E.L. Efficient Virtualized Data Centers

 

 Mellanox – On-Demand Network Services

 

 Intalio – Private Cloud Platform

 

 HP BladeSystem and ExSO SL-Series

 

 Xsigo – How to Unleash vSphere’s Full Potential with Xsigo Virtual I/O

 

 RNA Networks – Virtual Memory

 

 OpenFabrics Alliance – All things Virtual with OpenFabrics and IB