Last week, Mellanox released its latest Microsoft WHQL-certified WinOF 2.0 (Windows OpenFabrics) drivers, which deliver superior performance for low-latency, high-throughput clusters running Microsoft Windows® HPC Server 2008.
You may be asking yourself: how does this address my cluster-computing needs? Does the Windows OFED stack released by Mellanox deliver the same performance seen on the Linux OFED stack?
Well, the Windows networking stack is optimized to address the needs of various HPC vertical segments. In our benchmark tests with MPI applications that demand low latency and high throughput, we measured latencies as low as roughly 1 µs and uni-directional bandwidth of 3 GByte/s using Microsoft's MS-MPI.
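For readers curious how such figures are typically obtained: MPI latency and bandwidth are commonly derived from a ping-pong test, where two ranks exchange a message back and forth and the averaged round-trip time is halved. The sketch below shows only that arithmetic; the timing values are hypothetical placeholders, not Mellanox benchmark data.

```python
# Sketch of how ping-pong timings reduce to the latency/bandwidth
# figures quoted above. Timings here are hypothetical, not measured.

def ping_pong_metrics(msg_bytes, total_round_trip_sec, iterations):
    """Derive one-way latency (us) and uni-directional bandwidth (GB/s)
    from the total time of `iterations` ping-pong round trips."""
    one_way_sec = total_round_trip_sec / (2 * iterations)
    latency_us = one_way_sec * 1e6
    bandwidth_gbs = (msg_bytes / one_way_sec) / 1e9
    return latency_us, bandwidth_gbs

# Small message: latency-bound (hypothetical 2 us per round trip)
lat, _ = ping_pong_metrics(msg_bytes=8,
                           total_round_trip_sec=2e-6 * 1000,
                           iterations=1000)
print(f"one-way latency: {lat:.1f} us")   # 1.0 us

# Large message: bandwidth-bound (hypothetical timing chosen to
# illustrate a 3 GB/s uni-directional rate)
size = 4 * 2**20
_, bw = ping_pong_metrics(msg_bytes=size,
                          total_round_trip_sec=2 * (size / 3e9) * 100,
                          iterations=100)
print(f"bandwidth: {bw:.2f} GB/s")        # 3.00 GB/s
```

In a real run these timings would come from `MPI_Wtime()` around a send/receive loop between two ranks.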
Mellanox’s 40Gb/s InfiniBand adapters (ConnectX) and switches (InfiniScale IV), with their proven efficiency and scalability, allow data centers to scale to tens of thousands of nodes with no drop in performance. Our drivers and Upper Layer Protocols (ULPs) let end users take advantage of the RDMA networking available in Windows® HPC Server 2008.
Here is the link showing the compute efficiency of Mellanox InfiniBand compute nodes compared to Gigabit Ethernet (GigE) compute nodes running mathematical simulations on Windows® HPC Server 2008.
As the saying goes, “The proof is in the pudding.” Mellanox InfiniBand adapters and interconnect technology are the best option for all Enterprise Data Center and High Performance Computing needs.