Monthly Archives: November 2008

System Picking: Ready, Set, Go!

To recap my previous post: we've been setting the stage on which the vendors were to be evaluated, and now we're ready for the "big race" (which we'll run without naming names):

System: I considered two different dense systems, both of which met the CPU and memory requirements: dual-socket quad-core, 16GB of memory (2GB per core), and support for PCI-Express Gen2. One was a blade server system from a Tier-1 vendor and the other was a 1U server which provided more for less (2 servers in 1U). We reviewed the power requirements of each (blades were better in this category), cost (the differences were greater than 10%) and space (the 1U servers saved some space). Also, if we didn't need an external switch, the blades would require less overall, which impacts the big 3: Power, Cost, and Space.
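That comparison really comes down to rolling up the big 3 for each configuration once the external-switch question is settled. Below is a minimal sketch of that roll-up; every figure in it (node counts, wattages, prices) is a made-up placeholder for illustration, not the actual quotes we received.

```python
# Hypothetical roll-up of the "big 3" (power, cost, space) for the two
# candidate systems. All numbers are illustrative placeholders, NOT the
# actual vendor quotes from this evaluation.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    servers: int               # dual-socket quad-core nodes (8 cores each)
    watts_per_server: float    # assumed power draw per node
    price_per_server: float    # assumed price per node (USD)
    rack_units: int            # rack space consumed by the servers
    needs_external_switch: bool
    switch_watts: float = 250.0   # assumed external-switch power
    switch_price: float = 8000.0  # assumed external-switch price
    switch_ru: int = 1

    def totals(self):
        """Add up power, cost, and space, counting the switch only if needed."""
        extra = 1 if self.needs_external_switch else 0
        return {
            "power_w": self.servers * self.watts_per_server + extra * self.switch_watts,
            "cost_usd": self.servers * self.price_per_server + extra * self.switch_price,
            "space_ru": self.rack_units + extra * self.switch_ru,
        }


blades = Candidate("Tier-1 blade system", servers=16, watts_per_server=320,
                   price_per_server=5500, rack_units=10,
                   needs_external_switch=False)
twins = Candidate("1U twin (2 servers per U)", servers=16, watts_per_server=350,
                  price_per_server=4800, rack_units=8,
                  needs_external_switch=True)

for candidate in (blades, twins):
    print(candidate.name, candidate.totals())
```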

I/O: We wanted to have all 3 dominant interconnects and reviewed switches and NICs separately.

Switches: 1GigE (many options in 1U, so we just had to compare power and cost); 10GigE (there weren't many options; we considered three, which varied in the performance they provided and in price); and 40Gb/s InfiniBand (from us/Mellanox).

NICs: 1GigE (we decided to use the on-board ports); for 10GigE and 40Gb/s InfiniBand we picked our/Mellanox ConnectX adapters, which provide the Virtual Protocol Interconnect (VPI) option (best-in-class performance with both 10GigE and 40Gb/s InfiniBand on the same NIC).

Storage: As mentioned in my previous posts, I wanted to use a Tier-1 vendor which would give us access to all the I/O options and, if necessary, add a gateway to enable the rest of them. (I'm planning a phase 2 which would include Tier-2 vendors as well, but it has yet to be executed.) The choice was fairly easy due to the limited number of players in the storage arena.

Needless to say, we negotiated prices (hopefully effectively) and shared our concerns and performance targets with all the vendors involved to help them come forward with the best system that met those requirements. As a result, we were exposed to many future systems which promise to meet our requirements, BUT, keeping ourselves honest to the "off-the-shelf" criterion we initially set and promised to follow, we narrowed the "sea of promises" down to what we could see, touch, and use today.

Picking the system proved to be a hard and long process, but nothing prepared me for the bureaucracy of the PO process (which I won't go into…). At the end of the day we chose the 1U servers and block storage with a file system layered over it.

I'll finish up with the savings numbers (if you would like additional details on this, you can send me an email), and in my next post I'll briefly describe the labor pains of the hardware bring-up. Last, but not least, the HUGE differences: POWER savings of ~35%, CAPEX savings of over 25%, and SPACE savings at the 25% mark.
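For anyone who wants to sanity-check the percentage math, each of those figures is simply (baseline - new) / baseline. A tiny sketch of that arithmetic, using hypothetical before/after values rather than our real numbers, looks like this:

```python
# Percent-savings arithmetic behind figures like "~35% power savings".
# The baseline/new pairs below are hypothetical stand-ins, not our real data.

def percent_saving(baseline: float, new: float) -> float:
    """Saving of `new` relative to `baseline`, expressed in percent."""
    return (baseline - new) / baseline * 100.0


hypothetical = {
    "power_kw":  (10.0, 6.5),          # ~35% saving
    "capex_usd": (400_000, 295_000),   # just over 25% saving
    "space_ru":  (40, 30),             # 25% saving
}

for metric, (baseline, new) in hypothetical.items():
    print(f"{metric}: {percent_saving(baseline, new):.1f}% saving")
```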

Nimrod Gindi
nimrodg@mellanox.com

Enterprise Data Center: Picking Hardware Can Be Hard Work

Recapping last week's post… I knew we wanted a system which would contain all the building blocks of the data center in a single (easily expandable) rack. Internally at Mellanox, I felt we should go through the full procurement process ourselves, to understand it and to give data-center managers better knowledge of this hard, and at times painful, process.

Now, with that high-level understanding in place, we had to start turning ideology into reality and decide on the components to be purchased. I wish it were as simple as it sounds… just buy it (storage, CPU, and I/O), receive it, use it… ya, right. When a data center manager attempts to buy hardware for a specific application or a set of applications, there are many parameters to take into consideration (I bet each of us does this unconsciously when buying something for home use).

CPU – "Can you feel the need? The need for speed." Tom Cruise's words from Top Gun apply here better than ever – and yes, we felt it too. We wanted to consider a system with 8 cores (we do want it to still be valid next Monday, and I guess 8 cores can carry us at least that far). Since time was of the essence, we couldn't wait for the next-generation CPUs which were promised to be just around the corner.

Storage – when considering this component we had to ensure a stable platform with all the features (dedup, high availability, hot spares, etc.), and we wanted a variety of drive speeds (from 15k RPM SAS/FC down to SATA). We narrowed things down to block storage with a file system layered over it externally (which would enable us to use both when required).

I/O – we wanted a variety of interconnects: 1GigE, 10GigE and 40Gb/s (QDR) InfiniBand. Having Virtual Protocol Interconnect (VPI) available made our decision easier, as it covered 2 out of the 3 in a single low-power adapter.

Bearing all the above in mind, we needed to pass our options through several filters to help us zero in on the right selection.

We started with the big 3: Business Alignment, Cost and Time.

Cost – this is a tricky one… you have CAPEX and OPEX, which means we had to consider each component for low power consumption while still being priced at a reasonable level (a rough back-of-the-envelope sketch of this trade-off follows the three filters below).

Time – we were eager to start, so delivery time was a factor… waiting four months for something was out of the question.

Business Alignment – I guess this is the most important filter but the hardest to capture. For us, the components needed to meet the following: offer all the I/O options, be off-the-shelf products, and be usable with any application "you'll throw at them".
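As promised under Cost, here's a rough back-of-the-envelope sketch of how CAPEX and power OPEX can be weighed together. The electricity rate, lifetime, and component figures are assumptions for illustration only, not numbers from our actual evaluation.

```python
# Rough CAPEX-vs-OPEX comparison for a single component choice.
# Electricity rate, lifetime, and component figures are assumptions only.

ELECTRICITY_USD_PER_KWH = 0.10   # assumed utility rate
LIFETIME_YEARS = 3               # assumed depreciation window
HOURS_PER_YEAR = 24 * 365


def lifetime_cost(capex_usd: float, avg_watts: float) -> float:
    """CAPEX plus energy OPEX over the assumed lifetime (cooling ignored)."""
    kwh = avg_watts / 1000.0 * HOURS_PER_YEAR * LIFETIME_YEARS
    return capex_usd + kwh * ELECTRICITY_USD_PER_KWH


# Two hypothetical options: cheaper but power-hungry vs. pricier but frugal.
option_a = lifetime_cost(capex_usd=4800, avg_watts=350)
option_b = lifetime_cost(capex_usd=5500, avg_watts=320)

print(f"Option A, 3-year cost: ${option_a:,.0f}")
print(f"Option B, 3-year cost: ${option_b:,.0f}")
```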

If anyone thought the above took us all the way home… well, I guess they're in for some surprises. In my next blog post I'll list the differences we found between two setups, both of which could address our business needs but which differed greatly in other major parameters.

Enterprise Data Center – Where do we start?

From my experience working with enterprise users, I've learned that even though everyone uses similar building blocks for their data center, with similar requirements, the heavy concentration on the application creates endless diversification in deployments and a need for concrete, application-centric data on which CIOs can base a decision.

When moving back to our HQ earlier this year, I was challenged with how to provide that information quickly and effectively.

Together with some individuals from our marketing and architecture organizations, the idea came up to "become an end-user". Easier said than done… how does an engineering-driven vendor do that?

I set out to take the off-the-shelf components that typically make up enterprise data centers, assemble them into a complete solution, and have them tested to give end users some basic data points to consider (without/before any specific changes or tuning).
