Benefits


Why FDR 56Gb/s InfiniBand?

Enables the highest performance and lowest latency

  • Proven scalability to tens of thousands of nodes
  • Maximum return-on-investment

Highest efficiency; maintains a balanced system, ensuring the highest productivity

  • Provides full bandwidth for PCIe 3.0 servers (see the bandwidth check after this list)
  • Proven in multi-process networking requirements
  • Low CPU overhead and high server utilization
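
As a rough back-of-the-envelope check on the "full bandwidth" claim (the encoding and lane figures below come from the InfiniBand and PCIe 3.0 specifications, not from this document):

    FDR link:     56 Gb/s × 64/66 encoding ≈ 54.3 Gb/s ≈ 6.8 GB/s per direction
    PCIe 3.0 x8:  8 GT/s × 8 lanes × 128/130 encoding ≈ 63.0 Gb/s ≈ 7.9 GB/s per direction

Since 7.9 GB/s exceeds 6.8 GB/s, a PCIe 3.0 x8 slot can carry the full FDR line rate in each direction, which is also consistent with the >12GB/s bidirectional figure quoted below.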

Performance driven architecture

  • 0.7µs MPI latency and >12GB/s bidirectional MPI bandwidth over FDR 56Gb/s InfiniBand (see the ping-pong sketch after this list)
  • MPI message rate of over 90 million messages per second
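
Latency figures like the 0.7µs above are conventionally reported as half the round-trip time of a two-rank ping-pong microbenchmark. Below is a minimal sketch of such a test in C with MPI (a generic illustration, not Mellanox's published benchmark harness; the file name pingpong.c is assumed):

    /* pingpong.c - minimal MPI ping-pong latency sketch.
     * Build:  mpicc -O2 pingpong.c -o pingpong
     * Run:    mpirun -np 2 ./pingpong   (one rank per node on the fabric) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000;   /* round trips to average over */
        char byte = 0;             /* 1-byte payload, as in latency tests */
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);       /* start both ranks together */
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)   /* one-way latency = half the round-trip time */
            printf("avg one-way latency: %.2f us\n",
                   (t1 - t0) / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }

Whether the result lands near 0.7µs depends on the HCA, switch hops, CPU binding, and MPI library in use.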

Superior application performance

  • Increases HPC application performance by 30% to over 100%
  • Doubles storage throughput, cutting backup time in half


InfiniBand Market Applications

InfiniBand is increasingly becoming the interconnect of choice not only in high-performance computing environments, but also in mainstream enterprise grids, data center virtualization solutions, storage, and embedded environments. The low latency and high performance of InfiniBand, coupled with the economic benefits of its consolidation and virtualization capabilities, give end customers an ideal combination as they build out their applications.


Why Mellanox 10/40GbE?

Mellanox's scale-out 10/40GbE products give users a more scalable, lower-latency, virtualized fabric with lower overall fabric cost and power consumption, greater efficiency, and simpler management than traditional Ethernet fabrics. With 10 and 40GbE NICs, core and top-of-rack switches, and fabric optimization software, a broader array of end users can benefit from a scalable, high-performance Ethernet fabric.


Mellanox IBM Contact:
Jim Lonergan
OEM Business Development Mgr.
Mellanox Technologies
Tel: (512) 897-8245
james@mellanox.com

  • Mellanox InfiniBand Configurator
  • IBM Reference Guide
  • IBM Redbooks