InfiniBand Connectivity for Manufacturing
Mechanical computer-aided design (MCAD) and computer-aided engineering (CAE) systems are integral parts of the design and development process for manufacturers. As MCAD and CAE software becomes more sophisticated, manufacturers have adopted HPC cluster computing environments to shorten processing times and reduce time to revenue for new products.
[Figures: thermal comfort CFD simulation; Volvo crash simulation]
The Connectivity Challenge
HPC cluster environments employ multi-core, multi-processor servers and high-speed storage. But without a high-performance network connecting them, clustered server performance is wasted while data sits behind the network bottleneck. To maintain a balanced system and achieve optimum performance for MCAD and CAE simulations, the network interconnect must eliminate this bottleneck, providing ample bandwidth headroom with minimum latency.
The Mellanox® Solution
Mellanox’s high-performance InfiniBand connectivity solutions maximize the cluster compute environment’s efficiency and scalability. Mellanox’s 56Gb/s InfiniBand is designed for multi-core, multi-processor environments and can efficiently handle multiple data streams simultaneously while guaranteeing fast and reliable data transfer for each stream. Mellanox InfiniBand enables scalable, fast communication among servers and storage to maximize HPC productivity for manufacturing, shortening development cycles and reducing time to market.
Key Mellanox Advantages
- The world’s fastest interconnect, supporting up to 56Gb/s
- Latency as low as 1 microsecond
- Full CPU offload with flexible RDMA capabilities, removing traditional network protocol processing from the CPU and increasing processor efficiency (see the sketch after this list).
- I/O Capex reduction – one 40Gb/s Mellanox adapter carries more traffic with higher reliability than four 10 Gigabit Ethernet adapters.
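To make the CPU-offload and RDMA point above concrete, the sketch below shows the basic resource setup an RDMA application performs with the open-source libibverbs API: opening an InfiniBand adapter, registering a memory buffer the adapter can access directly, and creating a queue pair that the hardware services without CPU involvement in the data path. This is a minimal illustration under stated assumptions rather than a complete transfer; connection establishment and the actual RDMA write are only noted in comments, and the buffer size and queue depths are arbitrary.

```c
/*
 * Minimal RDMA resource-setup sketch using the libibverbs API.
 * Assumes a host with an InfiniBand HCA and libibverbs installed;
 * buffer size and queue depths are illustrative, not recommendations.
 * Compile with: gcc rdma_sketch.c -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No InfiniBand devices found\n");
        return 1;
    }

    /* Open the first HCA and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the HCA can read/write it directly,
     * bypassing the CPU for data movement (the essence of RDMA). */
    size_t buf_size = 4096;
    void *buf = malloc(buf_size);
    memset(buf, 0, buf_size);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, buf_size,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue and a reliable-connected queue pair; work
     * requests posted to the QP are executed by the adapter hardware. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr qp_attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap     = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &qp_attr);

    printf("Registered %zu-byte buffer (lkey=0x%x) on device %s\n",
           buf_size, mr->lkey, ibv_get_device_name(dev_list[0]));

    /* A real RDMA write would now transition the QP to the RTS state,
     * exchange QP number, LID, rkey and buffer address with the peer
     * out of band, and post an IBV_WR_RDMA_WRITE work request. */

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```

Because the adapter moves registered memory directly between hosts, the CPU is free to run the MCAD/CAE solver rather than the network protocol stack, which is the efficiency gain the advantage list refers to.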