Monthly Archives: December 2008

Enabling the middleware to be super fast

As promised in my last post, and after reviewing the OPEX and CAPEX savings provided by a Virtual Protocol Interconnect (VPI)-oriented data center, we need to look at how the business can benefit from using such a unified system.

As described in my first post, we will be using off-the-shelf applications from well-known companies in the industry. This post reviews work done with GigaSpaces, a leading application-platform provider in the financial sector, using their XAP 6.6.0.

Benchmark Software/Middleware components:
- GigaSpaces XAP 6.6.0
- GigaSpaces API: Java openspaces
- Space operation measured: write
- Sun JVM 1.6
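
To make the measured operation concrete, here is a minimal timing-harness sketch in Java. The `SpaceOp` interface below is a hypothetical stand-in for the real `GigaSpace.write()` call from the openspaces API (the actual call requires the GigaSpaces runtime and a space proxy); the point of the sketch is how per-operation write latency can be timed and summarized, not the GigaSpaces setup itself.

```java
import java.util.Arrays;

// Hypothetical stand-in for the GigaSpaces write call; in the real
// benchmark this would be gigaSpace.write(someObject) via openspaces.
interface SpaceOp {
    void write(byte[] payload);
}

public class WriteLatencyBench {

    // Times n write operations and returns their latencies, in
    // microseconds, sorted ascending (ready for percentile lookup).
    static long[] measure(SpaceOp space, int n, int payloadBytes) {
        byte[] obj = new byte[payloadBytes];
        long[] latUs = new long[n];
        for (int i = 0; i < n; i++) {
            long t0 = System.nanoTime();
            space.write(obj);                         // the measured "write"
            latUs[i] = (System.nanoTime() - t0) / 1_000;
        }
        Arrays.sort(latUs);
        return latUs;
    }

    public static void main(String[] args) {
        // No-op stand-in, pure illustration of the harness itself.
        SpaceOp fake = payload -> { };
        long[] lat = measure(fake, 10_000, 4 * 1024); // 4K objects, as in the test
        System.out.println("median us: " + lat[lat.length / 2]);
        System.out.println("p99 us:    " + lat[(int) (lat.length * 0.99)]);
    }
}
```

In the real runs, the harness surrounding the write call stayed fixed while only the interconnect underneath (1GigE, 10GigE, or InfiniBand) changed, which is what makes the latency comparison meaningful.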

We wanted to focus on one of the most important factors for the financial sector: low latency, comparing the different interconnects: 1GigE, VPI (10GigE), and VPI (40Gb/s InfiniBand). The results were stunning for both the Mellanox High-Performance Enterprise Team and GigaSpaces (who provided great help in getting this benchmark running and in analyzing the results).

VPI (both InfiniBand and 10GbE) outperformed GigE by 25% to 100%: the more partitions, the more concurrent users, and the larger the objects in use, the greater the benefit VPI provides. Comparing the interconnect options within VPI, InfiniBand showed better performance than 10GbE. With GigaSpaces, transaction latency stayed below 1 ms (including synchronization with the backup) for 4K objects, with large numbers of concurrent users hitting the system at a high update rate. As you know, I truly believe in seeing the results, so below you'll find graphs of the results from our testing (which instantly generated quite a bit of interest among people in the industry).

In my next blog post I will review a variety of applications on which we've conducted tests – stay tuned.

But before I say my goodbyes, I've got good news and bad news… Where to start?

Well, I'll start with the bad: my next blog post won't appear until next year. The good news (at least for me) is that I'll be on vacation.

Have a happy new year…
Nimrod Gindi
nimrodg@mellanox.com

Look at this beautiful rack!

This week’s blog is short, but it’s about the eye candy: the Rack — the Data Center’s building block.
The pictures below visually describe what each one of us would like to have in our Data Center.

Density – over 150 cores within less than 10U. Three different interconnects, 1GigE, 10GigE and 40Gb/s InfiniBand, using two adapters and no thick jungle of cables. –> 25% savings in rack space.

Power – fewer servers, without giving up any compute power; fewer adapters, without giving up any capabilities; fewer switches, without giving up any reliability or bandwidth. –> 35% savings in power.

Cost – with fewer switches and smaller servers, the saved space enables better cooling. Cost is (inevitably) lower by 25%.

Just imagine this Rack with only a single interconnect of choice, and you’ll experience what I and many people have seen: a simple tidy solution leads to better functioning of teams and faster responses to problems (if they ever occur).

Bringing the rack into a functional condition hasn’t been the easiest thing, I agree. When I said last time that some “labor pain” was involved, I mainly meant the pain of finding a place in the data center… I never knew how hard it could be to allocate floor space before going through this experience. But once we got the rack built in place (standing there in the corner can be a bit claustrophobic), sliding in the servers and switches took almost zero time. And thanks to a pre-prepared OS image, the entire rack was up and running within less than 24 hours.

I’ll leave you at this point to see the rack for yourself. I’ll be back in my next post with the first market application that we’ve used with that “Data Center in a Rack” – GigaSpaces.

Nimrod Gindi
nimrodg@mellanox.com
