Monthly Archives: May 2013

ConnectX-3 Leverages Network Services in SDN Era

Guest blog by: Alon Harel


If your job is related to networking, be it as a network admin, an R&D engineer, an architect, or in any other role involving networks, it is very likely you have heard people around you (or GASP! maybe even heard yourself) express doubts about the proliferation of Software Defined Networking (SDN) and OpenFlow. How many times have you encountered skepticism about this revolutionary new concept of decoupling the control and data planes and “re-inventing the wheel”? Many people used to think, “this is hype; it will go away like other new technologies did, and it will never replace the traditional network protocols…” Well, if you perceive SDN/OpenFlow only as a replacement for today’s distributed network protocols, these doubts may turn out to be valid. The idea that “OpenFlow is here to replace the old strict protocols” is pretty much the message one gets from reading the early white papers on OpenFlow. Those papers described the primary motivation for moving to OpenFlow as the determination to introduce innovation in the control plane (that is, the ability to test and apply new forwarding schemes in the network).

This long preface is the background for the use case we present below. This use case is not about a new forwarding scheme, nor is it about re-implementing protocols; rather, it is a complementary solution for existing traditional networks. It is about adding network services in an agile way, allowing cost-efficient scalability. It is innovative and fresh and, most importantly, it could not have been done prior to the SDN era. Its simplicity and the fact that it relies on some very basic notions of OpenFlow can only spark the imagination about what can be done further using the SDN toolbox.

RADWARE’s security appliance, powered by Mellanox’s OpenFlow-enabled ConnectX®-3 adapter, brings a new value proposition to the network appliance market, demonstrating the power of SDN by enabling the addition of network services in an efficient and scalable way.

Security and attack mitigation services are applied to pre-defined protected objects (servers) identified by their IP addresses. Prior to SDN, the security appliance had to be a ‘bump in the wire’, because all traffic destined for the protected objects had to traverse it. This, of course, dictates the physical network topology, is limited by the appliance’s port bandwidth, and imposes high complexity when scale comes into play.

RADWARE’s DefenseFlow software identifies abnormal network behavior by monitoring the number of bytes and packets in specific flows destined for the protected objects. The monitoring is performed by installing specific flows in the forwarding hardware solely for the sake of counting the data traversing them. Flow configuration and counter retrieval are performed via standard OpenFlow primitives. The naïve approach would be to use the OpenFlow switches to accommodate the flows (counters); however, the limited resource capacity of commodity switches (mainly TCAM, which is the prime resource for OpenFlow) rules out this option. (Note that a switch may be in the data path for hundreds or thousands of VMs, each with several monitored flows.) Thus, the viability of the solution must come from somewhere else. Enter Mellanox’s OpenFlow-enabled ConnectX-3 SR-IOV adapter.
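To make this concrete, here is a minimal sketch of how a monitoring application might install a counting flow and read its counters through Floodlight’s REST API. The endpoint paths and field names follow Floodlight’s Static Flow Pusher from that era, but the controller address, datapath ID, and protected-object IP are hypothetical, and this is an illustration rather than DefenseFlow’s actual implementation.

```python
import json
import urllib.request

CONTROLLER = "http://127.0.0.1:8080"      # hypothetical Floodlight address
DPID = "00:00:00:00:00:00:00:01"          # hypothetical eSwitch datapath ID

def install_counting_flow(name, dst_ip):
    # A low-priority entry whose only job is to count packets/bytes
    # destined for a protected object; forwarding stays "normal",
    # so the data path is unchanged.
    flow = {
        "switch": DPID,
        "name": name,
        "priority": "10",
        "ether-type": "0x0800",           # match IPv4 traffic
        "dst-ip": dst_ip,
        "active": "true",
        "actions": "output=normal",       # keep default forwarding
    }
    req = urllib.request.Request(
        CONTROLLER + "/wm/staticflowentrypusher/json",
        data=json.dumps(flow).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req).read()

def read_flow_counters():
    # Retrieve per-flow packet/byte counters from the switch.
    url = CONTROLLER + "/wm/core/switch/" + DPID + "/flow/json"
    return json.loads(urllib.request.urlopen(url).read())

install_counting_flow("protect-10.0.0.5", "10.0.0.5")   # hypothetical protected server
print(read_flow_counters())
```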


ConnectX-3 incorporates an embedded switch (eSwitch) that enables VM communication to enjoy bare-metal performance. The HCA driver includes OpenFlow agent software, based on the Indigo-2 open source project, which allows the eSwitch to be controlled using the standard OpenFlow protocol.

Installing the flows (counters) on the edge switch (eSwitch) makes a lot of sense. First, each eSwitch is responsible for only a relatively small number of protected objects (only those servers running on a specific host), so the scale obstacle becomes a non-issue. Moreover, more sophisticated monitoring (for example, event generation when a threshold is crossed) can easily be added, offloading work from the monitoring application (DefenseFlow in this case).
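A rough sketch of such offloaded threshold monitoring appears below, assuming a simple poll-the-counters loop; the poll interval, rate threshold, and the counter-reading callable are illustrative assumptions, not part of the products described here.

```python
import time

POLL_INTERVAL = 5              # seconds between counter polls (assumed)
RATE_THRESHOLD = 50_000_000    # bytes/sec considered abnormal (assumed)

def watch_flow(flow_name, read_byte_count, on_threshold_crossed):
    # Poll one flow's byte counter and generate an event when the
    # observed rate crosses the threshold -- the kind of logic that
    # can be pushed toward the edge instead of the application.
    last = None
    while True:
        current = read_byte_count(flow_name)    # e.g. parsed from the flow stats above
        if last is not None:
            rate = (current - last) / POLL_INTERVAL
            if rate > RATE_THRESHOLD:
                on_threshold_crossed(flow_name, rate)
        last = current
        time.sleep(POLL_INTERVAL)
```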


You might think, “What’s new about that? We already have Open vSwitch (OVS) on the server, which is OpenFlow capable.” Well, when performance is the name of the game, OVS is out and SR-IOV technology is in. In SR-IOV mode, VM communication interfaces directly with the hardware, bypassing any virtual-switch software; therefore, in this mode OVS’s OpenFlow capabilities cannot be used (as OVS is not part of the data path).

Let’s take a look at this in practice by describing the setup and operation of the joint solution. The setup is based on standard servers equipped with Mellanox’s ConnectX-3 adapters, an OpenFlow-enabled switch, and RADWARE’s DefensePro appliance with DefenseFlow software, which interacts with the Floodlight OpenFlow controller.

Figure 1 – Setup

Here’s a description of the joint solution operation, as depicted in Figure 2:

  • DefenseFlow installs the relevant flows on each ConnectX-3 adapter.
  • The security appliance does not participate in the normal data path.
  • ConnectX-3 counts traffic matching the installed flows.
  • Flow counters are retrieved from ConnectX-3.
  • Once an attack is identified, only relevant traffic is diverted to the security appliance (where it is scrubbed of malicious flows and injected back toward its destination), as sketched below.
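The diversion step amounts to just another, higher-priority OpenFlow entry that overrides normal forwarding for the attacked destination only. A minimal sketch, reusing the flow-pushing helper and field names assumed in the first example (the appliance port number is likewise hypothetical):

```python
DPID = "00:00:00:00:00:00:00:01"          # hypothetical eSwitch datapath ID

def divert_to_appliance(push_flow, dst_ip, appliance_port):
    # A higher-priority entry beats the priority-10 counting flow,
    # steering traffic for the attacked protected object to the
    # security appliance's port while all other traffic is untouched.
    push_flow({
        "switch": DPID,
        "name": "divert-" + dst_ip,
        "priority": "100",
        "ether-type": "0x0800",
        "dst-ip": dst_ip,
        "active": "true",
        "actions": "output=" + str(appliance_port),
    })
```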


Figure 2 – Joint Solution

I would argue that any skeptic who sees this example use case, and the value it adds to existing network environments using these very basic OpenFlow knobs, would have to reconsider their SDN doubts…

UF launches HiPerGator, the state’s most powerful supercomputer

GAINESVILLE, Fla. — The University of Florida today unveiled the state’s most powerful supercomputer, a machine that will help researchers find life-saving drugs, make decades-long weather forecasts and improve armor for troops.

The HiPerGator supercomputer and recent tenfold increase in the size of the university’s data pipeline make UF one of the nation’s leading public universities in research computing.

“If we expect our researchers to be at the forefront of their fields, we need to make sure they have the most powerful tools available to science, and HiPerGator is one of those tools,” UF President Bernie Machen said. “The computer removes the physical limitations on what scientists and engineers can discover. It frees them to follow their imaginations wherever they lead.”

For UF immunologist David Ostrov, HiPerGator will slash a months-long test to identify safe drugs to a single eight-hour work day.

“HiPerGator can help get drugs from the computer to the clinic more quickly. We want to discover and deliver safe, effective therapies that protect or restore people’s health as soon as we can,” Ostrov said. “UF’s supercomputer will allow me to spend my time on research instead of computing.”

The Dell machine has a peak speed of 150 trillion calculations per second. Put another way, if each calculation were a word in a book, HiPerGator could read the millions of volumes in UF libraries several hundred times per second.
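The book analogy holds up as rough arithmetic. Assuming the UF libraries hold on the order of five million volumes averaging about 100,000 words each (both assumed figures for illustration):

```python
peak_calcs_per_sec = 150e12    # 150 trillion calculations per second
volumes = 5e6                  # assumed: ~5 million library volumes
words_per_volume = 100_000     # assumed: average words per volume

library_words = volumes * words_per_volume     # 5e11 words in the collection
print(peak_calcs_per_sec / library_words)      # 300.0 -> several hundred reads/sec
```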

UF worked with Dell, Terascala, Mellanox and AMD to build a machine that makes supercomputing power available to all UF faculty and their collaborators and spreads HiPerGator’s computing power over multiple simultaneous jobs instead of focusing it on a single task at warp speed.

HiPerGator features the latest in high-performance computing technology from Dell and AMD, with 16,384 processing cores; a Dell|Terascala HPC Storage Solution (DT-HSS 4.5) with the industry’s fastest open-source parallel file system; and Mellanox’s FDR 56Gb/s InfiniBand interconnects, which provide the highest bandwidth and lowest latency. Together, these features give UF researchers unprecedented computation and faster access to data to quickly further their research.

UF unveiled HiPerGator on Tuesday as part of a ribbon-cutting ceremony for the 25,000-square-foot UF Data Center built to house it. HiPerGator was purchased and assembled for $3.4 million, and the Data Center was built for $15 million.

Also today, the university announced that it is the first in the nation to fully implement the Internet2 Innovation Platform, a combination of new technologies and services that will further speed research computing.