Unleashing Performance, Scalability and Productivity with Intel Xeon 5500 Processors “Nehalem”

The industry has been talking about it for a long time, but on March 30th it was officially announced. The new Xeon 5500 “Nehalem” platform from Intel introduces a new server architecture for Intel-based platforms. The memory has moved from being attached to the chipset to being attached directly to the CPU, and memory speed has increased. More importantly, PCI-Express (PCIe) Gen2 can now be fully utilized to unleash new levels of performance and efficiency from Intel-based platforms. PCIe Gen2 is the interface between the CPU and memory on one side and, on the other, the networking that connects servers together to form compute clusters. With PCIe Gen2 now integrated in compute platforms from the majority of OEMs, more data can be sent and received by a single server or blade. This means that applications can exchange data faster and complete simulations sooner, bringing a competitive advantage to end-users. In order to feed PCIe Gen2, the networking needs to provide an equally big pipe, and this is what 40Gb/s InfiniBand brings to the table. It is no surprise that multiple server OEMs, for example HP and Dell, announced the availability of 40Gb/s InfiniBand in conjunction with Intel's announcement.
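
A quick back-of-the-envelope check shows why the two technologies are matched. Assuming the adapter sits in a PCIe Gen2 x8 slot and the link is a 4x QDR (40Gb/s) InfiniBand port (typical configurations, not stated above), both sides of the interface deliver roughly the same data rate:

```latex
% PCIe Gen2: 5 GT/s per lane with 8b/10b encoding -> 4 Gb/s of payload per lane
\text{PCIe Gen2 x8:}\quad 8 \times 5\,\mathrm{GT/s} \times \tfrac{8}{10} = 32\,\mathrm{Gb/s} \approx 4\,\mathrm{GB/s} \text{ per direction}
% 40Gb/s InfiniBand (4x QDR): 4 lanes at 10 Gb/s with 8b/10b encoding
\text{InfiniBand 4x QDR:}\quad 4 \times 10\,\mathrm{Gb/s} \times \tfrac{8}{10} = 32\,\mathrm{Gb/s} \approx 4\,\mathrm{GB/s} \text{ per direction}
```

A PCIe Gen1 x8 slot, at half that rate, would cap a 40Gb/s adapter well below line rate, which is why Gen2 platforms are needed to take full advantage of the fabric.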


I have been testing several applications to compare the performance benefits of Intel Xeon 5500 processors combined with Mellanox end-to-end 40Gb/s networking solutions. One of those applications was the Weather Research and Forecasting (WRF) model, widely used around the world. With Intel Xeon 5500-based servers, Mellanox ConnectX 40Gb/s InfiniBand adapters and the MTS3600 36-port 40Gb/s InfiniBand switch system, we witnessed a 100% increase in performance and productivity over previous Intel platforms.

With Direct Transport Compositor, a digital media rendering application, we have seen a 100% increase in frames-per-second delivery while increasing the screen anti-aliasing at the same time. Other applications have shown similar levels of performance and productivity gains.

The reasons for the new performance levels are the decrease in latency (down to 1 microsecond) and the large increase in throughput (more than 3.2GB/s uni-directional and more than 6.5GB/s bi-directional on a single InfiniBand port). With the increase in the number of CPU cores and the new server architecture, bigger pipes in and out of the servers are required in order to keep the system balanced and to avoid creating artificial bottlenecks. Another advantage of InfiniBand is its ability to use RDMA to transfer data directly to and from host memory, without involving the CPU in the data movement. This means one thing: more CPU cycles can be dedicated to the applications!
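
To make the RDMA point concrete, here is a minimal sketch using the libibverbs API of posting an RDMA write, where the adapter places data directly into a remote buffer. Connection setup is omitted: the queue pair, the registered local memory region, and the remote buffer's address and rkey (exchanged out of band) are assumed to already exist, and the helper name post_rdma_write is illustrative.

```c
/*
 * Minimal sketch (not a complete program) of an RDMA write with libibverbs.
 * Assumes the queue pair is already connected, local_buf lies inside the
 * registered memory region local_mr, and the peer's buffer address and rkey
 * were exchanged out of band.
 */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *local_mr,
                    void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge;
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&sge, 0, sizeof(sge));
    sge.addr   = (uintptr_t)local_buf;   /* source buffer, already registered */
    sge.length = (uint32_t)len;
    sge.lkey   = local_mr->lkey;

    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* place data straight into remote memory */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion on the send CQ */
    wr.wr.rdma.remote_addr = remote_addr;        /* target virtual address on the peer */
    wr.wr.rdma.rkey        = remote_rkey;        /* peer's memory registration key */

    /* The adapter moves the payload; no CPU on either side copies the data. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

The work request is executed entirely by the adapter, and neither side's CPU copies the payload, which is where the reclaimed cycles come from.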


Gilad Shainer

Director, HPC Marketing