Monthly Archives: February 2009

Gain A Competitive Advantage

BridgeX received an excellent response from all the analysts that we briefed over the last few weeks. 

One article talked about how BridgeX reminded the author of the early days of networking, when networking companies delivered bridges for Ethernet, Token Ring and Banyan VINES. Another described the mishmash of protocols in the data center as a familiar story.

In my opinion, when data centers moved from Fast Ethernet to Gigabit Ethernet it was an easy decision, because the growth in Internet applications demanded the 10x performance improvement. The same 10x jump is now available with 10 Gigabit Ethernet, yet data centers have not rushed to deploy the technology. Why? The killer app for 10 Gigabit Ethernet is I/O consolidation, but the Ethernet protocol itself is still being enhanced before it can be deployed as an I/O consolidation fabric. These enhancements are being made within the IEEE Data Center Bridging Workgroup and will deliver new functionality to Ethernet, yet the timeline for products is still a BIG question mark. Normally, in a growth economy, products roll out within 12 to 18 months of spec finalization; in the current economic conditions it might take longer, and the spec itself is at least 18 months away from finalization. Until then, 10 Gigabit Ethernet will be deployed in data centers only for smaller, niche applications, not for I/O consolidation. So, if data centers want to save energy costs, reduce floor space and lower TCO today, then deploying a proven I/O consolidation fabric is critical.

Just some of the enhancements currently being made to the Ethernet protocol in the IEEE:

  1. Lossless fabric
  2. Creating Virtual Lanes and providing granular QoS
  3. Enabling Fat-Tree
  4. Congestion management

These capabilities are already part of the InfiniBand fabric, which has been shipping for almost nine years and has been successfully deployed in several data centers and high-performance commercial applications.

Oracle Exadata is a great product that drives InfiniBand to the forefront of data centers for database applications. Exadata brings new thinking and a new strategy for delivering higher I/O while lowering energy costs. Exadata certainly delivers a competitive advantage.

Similarly, BridgeX coupled with ConnectX adapters and InfiniScale switching platforms provides a competitive advantage by delivering a cost-optimized I/O consolidation fabric. Data centers can consolidate their I/O using InfiniBand as the physical fabric while the virtual fabric continues to be Ethernet or Fibre Channel. This means that applications that need an Ethernet or Fibre Channel transport will run unmodified in the InfiniBand cloud.

I think it is time for data centers to take a new look at their infrastructure and rethink their investments to gain an even greater competitive advantage. When the economy turns around, those whose infrastructure lets them leapfrog the competition will eventually win.

TA Ramanujam (TAR)
tar@mellanox.com

Mellanox at VMworld Europe

Yesterday, Motti Beck, Ali Ayoub (our main VMware software developer at Mellanox) and I diligently put together a very compelling demo that highlights the convergence capabilities of the BridgeX BX4000 gateway we announced last week.

We unpacked everything and got it all up and running in less than an hour (this after we sorted out the usual power and logistical issues that always come with having a booth).

The slide below illustrates the topology of the demo. Essentially, we have two ConnectX adapter cards in one of the Dell servers, running two different interconnect fabrics. One adapter is running 40Gb/s InfiniBand, while the other is running 10 Gigabit Ethernet.

1. The 40Gb/s InfiniBand adapter is connected to our MTS3600 40Gb/s InfiniBand switch, which passes the traffic through the BridgeX BX4020, where the packets are converted to Ethernet. The packets then run through the Arista 10GigE switch and into the LeftHand appliance virtual machine, which resides on the Dell server (running ESX 3.5 and our certified 10GigE driver over our ConnectX EN 10GigE SFP+ adapter). We are showing a movie served from the iSCSI storage on the InfiniBand end-point (the Dell Linux server).

2. The 10 Gigabit Ethernet adapter connects directly to the BridgeX BX4020, where the traffic is converted to FC (effectively FCoE). The traffic then moves to the Brocade Fibre Channel switch and then directly to the NetApp storage. We are showing a movie served from the FC NetApp storage on the 10GigE end-point (the Dell Linux server).

If you are coming to VMworld Europe (or are already here), come and see us at Booth #100 and we will be happy to walk you through the demo.

Brian Sparks
Director, Marketing Communications
brian@mellanox.com

I/O Agnostic Fabric Consolidation

Today we announced one of our most innovative and strategic products – BridgeX, an I/O-agnostic fabric consolidation silicon. Drop it into a 1U enclosure and it becomes a full-fledged system (the BX4000).

A few years back we defined our product strategy to deliver single-wire I/O consolidation to data centers. The approach was not to support some random transports to deliver I/O consolidation, but to use the transports that data centers are accustomed to for the smooth running of their businesses. ConnectX, an offspring of this strategy, supports InfiniBand, Ethernet and FCoE. ConnectX consolidates the I/O on the adapter, but the data still has to go through different access switches. BridgeX, the second offspring of our product strategy, provides stateless gateway functionality that allows for access-layer consolidation. BridgeX lets data centers remove two fabrics by deploying a single InfiniBand fabric that can present several virtualized GigE, 10GigE, and 2, 4 or 8Gb/s Fibre Channel interfaces in a single physical server. BridgeX, with its software counterpart BridgeX Manager running alongside on a CPU, delivers management functionality for vNICs and vHBAs for both virtualized OSes (VMware, Xen, Hyper-V) and non-virtualized OSes (Linux and Windows).

The virtual I/O drivers and BridgeX's stateless gateway implementation preserve packet/frame integrity. The virtual I/O drivers on the host add InfiniBand headers to the Ethernet or Fibre Channel frames, and the gateway (BridgeX) removes the headers and delivers the frames on the appropriate LAN or SAN port. Similarly, the gateway adds the InfiniBand headers to the packets/frames it receives from the LAN/SAN side and sends them to the host, which removes the encapsulation and delivers the packet/frame to the application. This simple, easy, and innovative implementation saves not only deployment costs but also significant energy and cooling costs.
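
To make the flow concrete, here is a minimal Python sketch of the encapsulate-on-host / strip-on-gateway idea described above. The header layout, field names and sizes are invented purely for illustration and are not the actual EoIB/FCoIB wire formats.

```python
import struct

# Hypothetical, simplified "InfiniBand transport" header used only to
# illustrate the flow: virtual port, payload type, payload length.
IB_HDR_FMT = "!HHI"
TYPE_ETH, TYPE_FC = 1, 2

def host_encapsulate(vport, ptype, frame):
    """Host-side virtual I/O driver: prepend the InfiniBand header to an
    unmodified Ethernet or Fibre Channel frame."""
    return struct.pack(IB_HDR_FMT, vport, ptype, len(frame)) + frame

def gateway_strip(packet):
    """Gateway side: remove the InfiniBand header and hand the original
    frame, untouched, to the matching LAN or SAN port."""
    hdr_len = struct.calcsize(IB_HDR_FMT)
    vport, ptype, length = struct.unpack(IB_HDR_FMT, packet[:hdr_len])
    return vport, ptype, packet[hdr_len:hdr_len + length]

# The gateway keeps no per-flow state: the frame that comes out is
# byte-for-byte the frame the application produced.
eth_frame = b"\x00\x11\x22\x33\x44\x55" + b"payload"
packet = host_encapsulate(vport=7, ptype=TYPE_ETH, frame=eth_frame)
assert gateway_strip(packet) == (7, TYPE_ETH, eth_frame)
```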

We briefed several analysts over the last few weeks, and most of them concurred that the product is innovative and that, in times like these, a BridgeX-based solution can cut costs, speed up deployments and improve performance.

TA Ramanujam (TAR)
tar@mellanox.com

Performance Testing 29West LBM

As promised in my last blog post (over two weeks ago), this post will focus on results from a more financial market-related application. The results below come from testing performed with 29West LBM 3.3.9.

29West LBM offers topic-based publish/subscribe semantics without a central server. Its primary design goal is to minimize latency. Many end-users and middleware providers incorporate LBM into their own software via the LBM API. Publish/subscribe is an asynchronous messaging paradigm in which senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into classes without knowledge of what (if any) subscribers there may be, and subscribers express interest in one or more classes and receive only the messages that interest them, without knowledge of what (if any) publishers there are.
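
For readers less familiar with the model, here is a small Python sketch of topic-based publish/subscribe. It is not the LBM API, and unlike LBM (which needs no central server) it routes messages through an in-process object; it only illustrates how publishers and subscribers are decoupled through topics.

```python
from collections import defaultdict

class TopicBus:
    """Illustrative topic-based publish/subscribe; not the 29West LBM API."""

    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # A subscriber expresses interest in a class of messages without
        # knowing who (if anyone) publishes them.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # A publisher characterizes its message by topic without knowing
        # who (if anyone) is listening.
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
bus.subscribe("quotes/NASDAQ/XYZ", lambda m: print("tick:", m))
bus.publish("quotes/NASDAQ/XYZ", {"bid": 10.01, "ask": 10.02})
bus.publish("quotes/NASDAQ/ABC", {"bid": 5.00, "ask": 5.01})  # no subscribers, silently dropped
```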

We conducted the testing with two servers – the hardware set-up was, as always, the default, out-of-the-box EDC testing cluster that we have all experienced and learned from during the first set of blog posts. With 29West LBM we ran two separate tests: lbmpong for latency, and lbmsrc/lbmrcv for message rate. For both tests we used the following interconnects: GigE, Mellanox VPI 10GigE, and Mellanox VPI 40Gb/s InfiniBand.

When using InfiniBand we used three different upper-layer protocols (ULPs), none of which required any code intervention: IPoIB connected mode (CM), IPoIB datagram mode (UD) and Sockets Direct Protocol (SDP).
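
To illustrate what "no code intervention" means here, the sketch below is ordinary Python sockets code. Which fabric it runs over is determined by the interface the target address routes to (GigE, 10GigE, or IPoIB in connected or datagram mode), and SDP is typically slotted underneath an unmodified application via a preloaded libsdp; nothing in the application changes. The address, port and message count are made up for the example.

```python
import socket

SERVER = ("192.168.10.2", 9000)   # hypothetical address on the test fabric

def send_messages(count=1000, size=1024):
    """Plain TCP sender: the same code runs unchanged over GigE, 10GigE,
    IPoIB connected/datagram mode, or SDP (via a preloaded libsdp)."""
    payload = b"x" * size
    with socket.create_connection(SERVER) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(count):
            sock.sendall(payload)

if __name__ == "__main__":
    send_messages()
```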

Unlike the hardware, which does not change, it is important to note that the software versions may change as official releases are updated, since we use only off-the-shelf releases. The Mellanox ConnectX VPI firmware version is 2.6.0 and the OFED (driver) version is 1.4, all running on RHEL 5 Update 2 as the OS.

We knew in theory that the 40Gb/s InfiniBand results would be better, but we did not estimate the difference correctly. 10GigE and InfiniBand beat GigE in the following order (from highest to lowest): SDP, IPoIB connected mode, IPoIB datagram mode (up to 8KB messages) and 10GigE. The latency improvement ranges from 30-80%, and the message-rate improvement, for message sizes larger than 1KB, ranges from 200-450%.

You can download the full results here.

In the next couple of weeks I will be traveling to Singapore to speak at the IDC FinTech conference. Look me up if you plan to attend. If I am not able to post another blog before then, I will make sure to eat the famous Singapore chili crab for my readers and tell you how it was… I mean the conference as well, not only the crab.

Nimrod Gindi

nimrodg@mellanox.com