QuickTransit Performance Results

As previously suggested, in this post I will review a different kind of application, one focused on conversion between platforms. QuickTransit, developed by a company called Transitive (recently acquired by IBM), is a cross-platform virtualization technology which allows applications that have been compiled for one operating system and processor to run on servers that use a different processor and operating system, without requiring any source code or binary changes.

We used QuickTransit for Solaris/SPARC-to-Linux/x86-64, and we tested for latency with a basic test that reflects the way financial-industry applications operate and stresses the interconnect performance between servers.

The topology we used was two servers, the first acting as the server and the second as the client. We measured latency at different object sizes and rates over the following interconnects: GigE, Mellanox ConnectX VPI 10GigE, and Mellanox ConnectX VPI 40Gb/s InfiniBand (a sketch of this kind of test appears below). For anyone who has not read the first posts, I would like to reiterate that we are committed to our “out-of-the-box” guideline, meaning that neither the application nor any of the drivers is changed after being downloaded from the web.
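
The test itself is conceptually simple: the client sends a message of a given size, the server echoes it back, and the round-trip time is averaged over many iterations. Below is a minimal sketch of such a ping-pong latency test, assuming plain TCP sockets; the port number, iteration count, and command-line layout are illustrative, not the actual harness we ran.

```c
/* pingpong.c -- hypothetical sketch of a request/response latency test.
 * Server: ./pingpong server <bytes>   Client: ./pingpong <server-ip> <bytes>
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

#define PORT  5555
#define ITERS 10000

static void die(const char *msg) { perror(msg); exit(1); }

/* Read exactly len bytes (a single recv() may return less). */
static void read_full(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0) die("recv");
        got += (size_t)n;
    }
}

int main(int argc, char **argv)
{
    size_t size = (argc > 2) ? (size_t)atol(argv[2]) : 64;
    char *buf = malloc(size);
    memset(buf, 'x', size);

    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(PORT) };

    if (argc > 1 && strcmp(argv[1], "server") != 0) {
        /* Client: send a message, wait for the echo, repeat, average. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        inet_pton(AF_INET, argv[1], &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
            die("connect");

        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        for (int i = 0; i < ITERS; i++) {
            if (send(fd, buf, size, 0) < 0) die("send");
            read_full(fd, buf, size);
        }
        gettimeofday(&t1, NULL);
        double us = (t1.tv_sec - t0.tv_sec) * 1e6
                  + (t1.tv_usec - t0.tv_usec);
        printf("%zu-byte messages: %.1f us average round trip\n",
               size, us / ITERS);
        close(fd);
    } else {
        /* Server: echo every message back unchanged. */
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(lfd, (struct sockaddr *)&addr, sizeof addr) < 0) die("bind");
        listen(lfd, 1);
        int fd = accept(lfd, NULL, NULL);
        for (int i = 0; i < ITERS; i++) {
            read_full(fd, buf, size);
            if (send(fd, buf, size, 0) < 0) die("send");
        }
        close(fd);
        close(lfd);
    }
    free(buf);
    return 0;
}
```

Run `./pingpong server 1024` on one host and `./pingpong <server-ip> 1024` on the other (the message size must match on both sides). Note that pointing the client at the server’s GigE, 10GigE, or IPoIB address is all it takes to select the interconnect under test.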

With InfiniBand we used three different Upper Layer Protocols (ULPs), none of which requires code intervention: IPoIB connected mode (CM), IPoIB datagram mode (UD), and the Sockets Direct Protocol (SDP). The results were stunning, mainly because we had assumed that with all the layers of software, on top of the software that translates SPARC/Solaris code to x86/Linux code, the interconnect would have little impact, if any.
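
For those wondering how these ULPs qualify as “no code intervention”: IPoIB exposes the InfiniBand port as a regular IP network interface, so an unmodified sockets application (like the sketch above) simply connects to the IPoIB address. SDP, on the OFED stack we assume here, is interposed at run time, e.g. by launching the unchanged binary as `LD_PRELOAD=libsdp.so ./pingpong <server-ip> 1024`, so that the preloaded library transparently redirects its TCP socket calls onto SDP.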

We learned that 40Gb/s InfiniBand performs significantly better than GigE across a wide range of packet sizes and transmission rates: latency was more than 2x better with InfiniBand, and execution was 30% faster with 10GigE. Go and beat that…


Let’s look at the results in a couple of different ways, and in particular at the size of the messages being sent. The advantage described above applies to the small message sizes (see graph #2), while as we move to larger message sizes the advantage, already striking, becomes humongous.

In my next blog post I plan to show more results that are closely related to the financial markets. If anyone out there identifies an application they would like our dedicated team to benchmark, please step forward and send me an e-mail.

Nimrod Gindi
nimrodg@mellanox.com