System Picking: Ready, Set, Go!

To recap my previous post: we’ve set the stage on which the vendors were to be evaluated, and we’re ready for the “big race” (which we’ll run without “naming names”):

System: I considered two dense systems, both of which met the CPU and memory requirements: dual-socket quad-core, 16GB of memory (2GB per core), and support for PCI-Express Gen2. One was a blade server system from a Tier-1 vendor; the other was a 1U server that provided more for less (two servers in 1U). We compared each on power (the blades were better in this category), cost (the difference was >10%), and space (the 1U servers saved some). Also, where the blades’ integrated switching removed the need for an external switch, they required less of the big 3: Power, Cost, and Space (a back-of-the-envelope sketch of this comparison follows after this list).

I/O: We wanted to have all three dominant interconnects, and we reviewed switches and NICs separately.

Switches: 1GigE (many 1U options; we just had to compare power and cost); 10GigE (there weren’t many options; we considered three, which varied in performance and price); and 40Gb/s InfiniBand (from us/Mellanox).

NICs: 1GigE (we decided to use the on-board ports); for 10GigE and 40Gb/s InfiniBand we picked our/Mellanox ConnectX adapters, which provide the Virtual Protocol Interconnect (VPI) option: best-in-class performance with both 10GigE and 40Gb/s InfiniBand on the same NIC (a sketch of picking a VPI port’s fabric also follows after this list).

Storage: As mentioned in my previous posts, I wanted to use a Tier-1 vendor that would give us access to all the I/O options, adding a gateway if necessary to enable them all. (I’m planning a phase 2 that would include Tier-2 vendors as well, but it has yet to be executed.) The choice was fairly easy given the limited number of players in the storage arena.
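
To make the blade-vs-1U trade-off concrete, here’s a back-of-the-envelope sketch in Python of the “big 3” comparison from the System item above. Every number in it is a hypothetical placeholder, not our actual quotes; all I’ve disclosed is that the cost gap was over 10% and that the blades won on power.

```python
# Back-of-the-envelope blade-vs-1U comparison across the "big 3":
# power, cost, and space. All figures are hypothetical placeholders.

CORES_NEEDED = 512              # hypothetical cluster size, in cores
CORES_PER_SERVER = 8            # dual-socket quad-core

candidates = {
    # watts and dollars are per server; rack_units is space per server
    "blade":   {"watts": 280, "dollars": 4200, "rack_units": 10.0 / 16,
                "needs_external_switch": False},  # integrated chassis switch
    "1U twin": {"watts": 320, "dollars": 3600, "rack_units": 0.5,
                "needs_external_switch": True},   # 2 servers per 1U
}

# hypothetical 24-port external switch
SWITCH_WATTS, SWITCH_DOLLARS, SWITCH_RU, SWITCH_PORTS = 150, 8000, 1, 24

for name, c in candidates.items():
    servers = CORES_NEEDED // CORES_PER_SERVER
    watts = servers * c["watts"]
    dollars = servers * c["dollars"]
    space = servers * c["rack_units"]
    if c["needs_external_switch"]:
        # no integrated switch: external switches hit all of the big 3
        switches = -(-servers // SWITCH_PORTS)  # ceiling division
        watts += switches * SWITCH_WATTS
        dollars += switches * SWITCH_DOLLARS
        space += switches * SWITCH_RU
    print("%-8s %6d W   $%7d   %5.1f U" % (name, watts, dollars, space))
```

And since the NIC choice hinged on VPI, here’s a minimal sketch of setting each ConnectX port’s fabric type. It assumes a Linux host with the mlx4_core driver, which exposes a writable per-port type attribute in sysfs; the PCI address below is hypothetical, and the writes require root.

```python
"""Minimal VPI sketch: pick the fabric for each ConnectX port via sysfs.

Assumes the Linux mlx4_core driver and its mlx4_port1/mlx4_port2
attributes; the PCI address is hypothetical. Run as root.
"""

PCI_DEV = "/sys/bus/pci/devices/0000:06:00.0"  # hypothetical ConnectX device

def set_port_type(port, fabric):
    """Write 'ib', 'eth', or 'auto' to the port-type attribute."""
    if fabric not in ("ib", "eth", "auto"):
        raise ValueError("fabric must be 'ib', 'eth', or 'auto'")
    with open("%s/mlx4_port%d" % (PCI_DEV, port), "w") as f:
        f.write(fabric)

if __name__ == "__main__":
    set_port_type(1, "ib")   # port 1: 40Gb/s InfiniBand
    set_port_type(2, "eth")  # port 2: 10GigE
```

The same NIC can then serve whichever fabric a given test calls for, which is exactly why VPI mattered for a setup comparing all three interconnects.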

Needless to say, we negotiated prices (hopefully effectively) and shared our concerns and performance targets with all the vendors involved, to help them come forward with the best system meeting those requirements. As a result, we were exposed to many future systems which promise to meet our requirements, BUT, keeping ourselves honest to the “off-the-shelf” criterion we initially set and promised to follow, we narrowed the “sea of promises” down to what we can see, touch, and use today.

Picking the system proved to be a hard and long process, but nothing prepared me for the bureaucracy of the PO process (which I won’t go into…). At the end of the day we chose the 1U servers, and storage based on block storage with a file system layered on top of it.

I’ll finish up with the savings numbers (if you would like additional details on these, you can send me an email), and in my next post I’ll briefly describe the labor pains of the hardware bring-up. Last, but not least, the HUGE differences: POWER savings of ~35%, CAP-EX savings of over 25%, and SPACE savings at the 25% mark.

Nimrod Gindi
nimrodg@mellanox.com