Why FDR 56Gb/s InfiniBand?
Enables the highest performance and lowest latency
- Proven scalability for tens of thousands of nodes
- Maximum return on investment
Highest efficiency; maintains a balanced system
- Provides full bandwidth for PCIe 3.0 servers
- Proven in multi-process networking requirements
- Low CPU overhead and high server utilization
Performance driven architecture
- MPI latency of 0.7µs; >12GB/s bandwidth with FDR 56Gb/s InfiniBand (bidirectional)
- MPI message rate of >90 million messages/sec
Superior application performance
- From 30% to over 100% increase in HPC application performance
- Doubles storage throughput, cutting backup time in half
What is FDR10 InfiniBand?
FDR10 InfiniBand is a Mellanox proprietary protocol similar in format
to FDR but running at a speed identical to 40Gb/s Ethernet. FDR10
supports true 40Gb/s InfiniBand line speeds with FEC while taking
advantage of mid-planes, connectors, PCB materials, and cables
designed for 40Gb/s Ethernet.
InfiniBand Market Applications
InfiniBand is increasingly becoming the interconnect of choice not
just in high-performance computing environments, but also in mainstream
enterprise grids, data center virtualization solutions, storage,
and embedded environments. The low latency and high performance
of InfiniBand, coupled with the economic benefits of its consolidation
and virtualization capabilities, provide end customers with the ideal
combination as they build out their applications.
Why Mellanox 10/40GbE?
Mellanox’s scale-out 10/40GbE products enable users to benefit from
a fabric that is more scalable, lower in latency, and better virtualized
than traditional Ethernet fabrics, with lower overall fabric cost and
power consumption, greater efficiency, and simplified management.
Utilizing 10 and 40GbE NICs, core and top-of-rack switches, and fabric
optimization software, a broader array of end users can benefit from
a more scalable, high-performance Ethernet fabric.