Performance Testing 29West LBM

10 Gigabit Ethernet, Data Center, InfiniBand

As promised in my last blog post (over two weeks ago), this post will focus on results from a more financial-market-related application. The results below come from testing performed with 29West LBM 3.3.9.

29West LBM offers topic-based publish/subscribe semantics without a central server. Its primary design goal is to minimize latency. Many end-users and middleware providers incorporate LBM into their own software via the LBM API. Publish/subscribe is an asynchronous messaging paradigm in which senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into classes (topics), without knowledge of what subscribers, if any, there may be. Subscribers express interest in one or more classes and receive only the messages that match, without knowledge of what publishers, if any, there are.
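To make the decoupling concrete, here is a minimal in-process sketch of topic-based publish/subscribe. This is purely illustrative and is not the LBM API; the `TopicBus` class, topic names, and message contents are all hypothetical.

```python
from collections import defaultdict

class TopicBus:
    """Minimal illustration of topic-based pub/sub: publishers send to a
    topic; they never address subscribers directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # A subscriber expresses interest in a class of messages (a topic).
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher knows only the topic, not who (if anyone) listens.
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("NASDAQ.AAPL", received.append)
bus.publish("NASDAQ.AAPL", {"bid": 90.15, "ask": 90.17})  # delivered
bus.publish("NYSE.IBM", {"bid": 85.00, "ask": 85.02})     # no subscriber: dropped
```

Note that neither side holds a reference to the other; the topic is the only coupling point, which is what lets a serverless design like LBM scale out.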

We conducted the testing with 2 servers; the full hardware set-up was, as always, the default, out-of-the-box EDC testing cluster that we've all come to know from the first set of blog posts. With 29West LBM we ran 2 separate tests: lbmpong for latency, and lbmsrc/lbmrcv for message rate. For both tests we used the following interconnects: GigE, Mellanox VPI 10GigE, and Mellanox VPI 40Gb/s InfiniBand.
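For readers unfamiliar with these two test styles: a ping-pong test reports round-trip times, from which one-way latency is commonly estimated as half the RTT (assuming symmetric paths), while a source/receiver pair reports throughput as messages per second. A small sketch of that arithmetic, with hypothetical sample values rather than results from the runs described here:

```python
def one_way_latency_us(rtt_samples_us):
    """Ping-pong tests measure round trips; one-way latency is
    conventionally estimated as RTT / 2 (symmetric-path assumption)."""
    return [rtt / 2.0 for rtt in rtt_samples_us]

def msg_rate(messages_received, elapsed_seconds):
    """Streaming (source/receiver) tests report messages per second."""
    return messages_received / elapsed_seconds

# Hypothetical numbers for illustration only:
rtts = [48.0, 52.0, 50.0]           # microseconds, round trip
print(one_way_latency_us(rtts))     # [24.0, 26.0, 25.0]
print(msg_rate(1_000_000, 4.0))     # 250000.0
```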

When using InfiniBand we used 3 different Upper Layer Protocols (ULPs), none of which required any code changes: IPoIB connected mode (CM), IPoIB datagram mode (UD), and Sockets Direct Protocol (SDP).
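As a rough sketch of why no code changes were needed: the IPoIB mode is a per-interface system setting, and SDP is typically enabled by preloading the OFED SDP library under an unmodified sockets application. The device name `ib0`, the sysfs path, and the application command line below are typical for an OFED 1.4 installation but may differ on yours:

```shell
# Select the IPoIB transport mode (run as root; device/path may vary):
echo connected > /sys/class/net/ib0/mode   # IPoIB connected mode (CM)
echo datagram  > /sys/class/net/ib0/mode   # IPoIB datagram mode (UD)

# Run an unmodified sockets application over SDP by preloading the
# OFED SDP library -- placeholder application name, no code changes:
LD_PRELOAD=libsdp.so ./your_lbm_app
```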

Unlike the hardware, which does not change, the software versions may change between posts as official releases are updated, since we use only off-the-shelf releases. Here, the Mellanox ConnectX VPI firmware version is 2.6.0 and the OFED (driver) version is 1.4, all running on RHEL 5 Update 2 as the OS.

We knew in theory that the 40Gb/s InfiniBand results would be better, but we underestimated the difference. 10GigE and InfiniBand outperform GigE in the following order (from high to low): SDP, IPoIB connected mode, IPoIB datagram mode (up to 8KB message sizes), and 10GigE. Latency improved by 30-80%; message rate, for message sizes larger than 1KB, improved by 200-450%.


You can download the full results here.

In the next couple of weeks I will be traveling to Singapore to speak at the IDC FinTech conference. Look me up if you plan to attend. If I am not able to post another blog entry before then, I will make sure to eat the famous Singapore chili crab for my readers, and I will make sure to tell you how it was… I mean the conference as well, not only the crab.

Nimrod Gindi
