Category Archives: Data Center

Performance Testing 29West LBM

As promised in my last blog post (over two weeks ago), this post will focus on results from a more financial market-related application. The results below come from testing performed with 29West LBM 3.3.9.

29West LBM offers topic-based publish/subscribe semantics without a central server. Its primary design goal is to minimize latency. Many end-users and middleware providers incorporate LBM into their own software via the LBM API. Publish/subscribe is an asynchronous messaging paradigm in which senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Instead, published messages are characterized into classes (topics) without knowledge of what subscribers, if any, there may be; subscribers express interest in one or more classes and receive only the messages of interest, without knowledge of what publishers, if any, there are.
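To make the topic-based publish/subscribe model more concrete, below is a minimal sketch written against the LBM C API, modeled on the minimal source/receiver samples that ship with LBM. The topic name, header path, and simplified error handling are illustrative assumptions on my part, and exact details may differ in the 3.3.9 release, so treat this as a sketch rather than a drop-in program; in a real deployment the publisher and subscriber run in separate processes, as lbmsrc and lbmrcv do.

/* Minimal topic-based publish/subscribe sketch against the LBM C API.
 * Modeled on the minimal source/receiver samples shipped with LBM;
 * error handling is stripped and the topic name is an example only.
 * A real deployment would run the publisher and subscriber in separate
 * processes; this single program only illustrates the API calls. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <lbm/lbm.h>   /* header location may vary between installs */

/* Subscriber callback: invoked for every message delivered on the topic. */
static int on_message(lbm_rcv_t *rcv, lbm_msg_t *msg, void *clientd)
{
    (void)rcv; (void)clientd;
    if (msg->type == LBM_MSG_DATA)
        printf("Received %lu bytes on topic %s\n",
               (unsigned long)msg->len, msg->topic_name);
    return 0;
}

int main(void)
{
    lbm_context_t *ctx;
    lbm_rcv_topic_t *rcv_topic;
    lbm_topic_t *src_topic;
    lbm_rcv_t *rcv;
    lbm_src_t *src;
    const char *payload = "hello";

    /* One context owns the transport resources (sockets, threads). */
    lbm_context_create(&ctx, NULL, NULL, NULL);

    /* Subscriber side: express interest in a topic; no knowledge of publishers. */
    lbm_rcv_topic_lookup(&rcv_topic, ctx, "Quotes.Example", NULL);
    lbm_rcv_create(&rcv, ctx, rcv_topic, on_message, NULL, NULL);

    /* Publisher side: create a source for the topic; no knowledge of subscribers. */
    lbm_topic_alloc(&src_topic, ctx, "Quotes.Example", NULL);
    lbm_src_create(&src, ctx, src_topic, NULL, NULL, NULL);

    sleep(1);  /* allow topic resolution to complete, as the samples do */
    lbm_src_send(src, payload, strlen(payload), LBM_MSG_FLUSH | LBM_SRC_BLOCK);
    sleep(1);  /* give delivery a moment before tearing down */

    lbm_src_delete(src);
    lbm_rcv_delete(rcv);
    lbm_context_delete(ctx);
    return 0;
}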

We conducted the testing with two servers; the hardware set-up was, as always, the default, out-of-the-box EDC testing cluster that we’ve all experienced and learned from during the first set of blog posts. With 29West LBM we used two separate test runs: lbmpong for latency, and lbmsrc/lbmrcv for message rate. For both tests, we used the following interconnects: GigE, Mellanox VPI 10GigE, and Mellanox VPI 40Gb/s InfiniBand.

When using InfiniBand, we used three different Upper Layer Protocols (ULPs), none of which required any code changes: IPoIB connected mode (CM), IPoIB datagram mode (UD), and the Sockets Direct Protocol (SDP).
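The reason no code changes are needed is that all three ULPs sit underneath the standard sockets interface: IPoIB simply assigns an IP address to the InfiniBand port (in connected or datagram mode), and SDP is typically interposed transparently, for example by preloading OFED’s libsdp so that ordinary AF_INET stream sockets are carried over SDP. As a hedged illustration (the address and port below are placeholders, not values from our tests), the plain TCP client that follows contains nothing interconnect-specific; which fabric carries its traffic is decided entirely by the interface it connects through and by configuration outside the code.

/* Plain TCP client used as an illustration: nothing in this code is
 * specific to GigE, 10GigE, IPoIB, or SDP.  Running it over IPoIB just
 * means connecting through the IP address assigned to the IB interface;
 * running it over SDP is typically done by preloading OFED's libsdp
 * (e.g. LD_PRELOAD=libsdp.so), which redirects AF_INET stream sockets.
 * The address and port below are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in srv;
    const char *msg = "ping";
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5001);                         /* placeholder port */
    inet_pton(AF_INET, "192.168.1.10", &srv.sin_addr);  /* placeholder address */

    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    if (write(fd, msg, strlen(msg)) < 0)
        perror("write");
    close(fd);
    return 0;
}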

Unlike the hardware, which does not change, the software versions may change over time with regular official release updates, since we use only off-the-shelf releases. For this test, the Mellanox ConnectX VPI firmware version is 2.6.0, the OFED (driver) version is 1.4, and everything runs on RHEL 5 Update 2 as the OS.

We knew in theory that the 40Gb/s InfiniBand results would be better, but we did not estimate the size of the difference correctly. 10GigE and InfiniBand outperform GigE in the following order (from highest to lowest): SDP, IPoIB connected mode, IPoIB datagram mode (up to 8KB messages), and 10GigE. The gains range from 30-80% in latency, and from 200-450% in message rate for message sizes larger than 1KB.

 

You can download the full results here.

In the next couple of weeks I will be traveling to Singapore to speak at the IDC FinTech conference. Look me up if you plan to attend. If I am not able to post another blog before then, I will make sure to eat the famous Singapore chili crab for my readers, and I will make sure to tell you how it was… I meant the conference as well, not only the crab.

Nimrod Gindi

nimrodg@mellanox.com

Chilean Stock Exchange Streamlines Securities Transactions With IBM

In case you missed it, IBM recently made an announcement regarding their WebSphere MQ Low Latency Messaging running over native InfiniBand-enabled Blade Servers.

The performance the Chilean Stock Exchange is seeing is really impressive – 3,000 orders per second, with latency reduced by a factor of 100 from its current level. Latency is critical for the financial markets, and InfiniBand is certainly showing that it is the data center connectivity platform of choice.


Motti Beck
Motti@mellanox.com

Enterprise Data Center – Where do we start?

From my experience working with enterprise market users, I’ve learned that even though everyone builds their data centers from similar building blocks, with similar requirements, the heavy focus on the application creates endless diversification in deployments, and a need for concrete, application-centric data on which CIOs can base a decision.

When moving back to our HQ earlier this year, I was challenged with how to provide that information quickly and effectively.

Together with some individuals from our marketing and architecture organizations, the idea to “become an end-user” came up. Easier said than done… How does an engineering-driven vendor do that?

I set out to take the off-the-shelf components that typically make up enterprise data centers, build them into a complete solution, and have them tested in order to give end-users some basic data points to consider (without, or before, any specific changes or tuning).
