SUNNYVALE, Calif. and YOKNEAM, Israel – May 24, 2010 – Mellanox® Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of high-performance, end-to-end connectivity solutions for data center servers and storage systems, today announced that its ConnectX®-2 40Gb/s InfiniBand adapters with NVIDIA GPUDirect™ technology, its IS5600 648-port switch with FabricIT™ fabric management software, and its fiber optic cables are providing the Institute of Process Engineering (IPE) at the Chinese Academy of Sciences with world-leading networking and application acceleration for the Mole-8.5 system, the first petaflop GPGPU supercomputer in China. IPE is currently using the Mole-8.5 to conduct scientific simulations in areas such as chemical engineering, materials science, biochemistry, data and image processing, oil exploitation and recovery, and metallurgy.
“By incorporating Mellanox 40Gb/s InfiniBand with NVIDIA GPUDirect technology, we have been able to conduct scientific simulations using GPUs at performance levels that we would never have been able to achieve using a different interconnect,” said Dr. Xiaowei Wang of IPE. “The new Mole-8.5 Petaflop cluster, with industry-leading interconnect performance and efficiency, enables us to shorten the time it takes to run applications that are critical in the process of scientific discovery.”
The Mole-8.5 system was designed to achieve high efficiency on real applications while keeping acquisition and power costs low. With Mellanox InfiniBand and NVIDIA GPUDirect technology, GPUs exchange data across the fabric far more efficiently, removing a redundant system-memory copy from each transfer and increasing the performance of applications run on the Mole-8.5. Mellanox InfiniBand delivers up to 96 percent system utilization, allowing users to maximize the return on investment in their high-performance computing server and storage infrastructure.
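For readers curious how this works in practice, the sketch below is a minimal, hypothetical illustration of the first-generation GPUDirect pattern, not IPE's production code: a single CUDA page-locked buffer is registered with the InfiniBand adapter, so the same memory region serves both the GPU's DMA engine and the network hardware, with no intermediate copy between separate pinned buffers. Queue-pair setup and the remote peer are omitted for brevity, and the shared registration assumes a GPUDirect-enabled driver stack.

```c
/* Hypothetical sketch of GPUDirect v1-style buffer sharing: one pinned
 * host buffer is visible to both the CUDA driver and the InfiniBand
 * verbs stack. Error handling is abbreviated. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

#define BUF_SIZE (1 << 20)  /* 1 MiB staging buffer */

int main(void)
{
    /* Open the first InfiniBand device and allocate a protection domain. */
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no IB device found\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate page-locked host memory through CUDA so the GPU can DMA
     * results directly into it... */
    void *buf;
    cudaHostAlloc(&buf, BUF_SIZE, cudaHostAllocPortable);

    /* ...and register the SAME buffer with the HCA so it can be the
     * source of an RDMA operation, with no extra host-to-host memcpy. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    if (!mr) { fprintf(stderr, "registration failed\n"); return 1; }

    /* A real application would now cudaMemcpyAsync() device results into
     * buf and post an ibv_post_send() work request on a connected queue
     * pair; that setup and the remote peer are elided here. */
    printf("shared buffer %p registered: lkey=0x%x rkey=0x%x\n",
           buf, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    cudaFreeHost(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```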
“We are pleased to see increased adoption of Mellanox 40Gb/s InfiniBand switches, adapters and cable solutions in world-leading supercomputers,” said John Monson, vice president of marketing at Mellanox Technologies. “IPE has leveraged Mellanox’s lowest latency switch and adapter performance, transport offloads and NVIDIA GPUDirect application acceleration to enable its HPC applications to derive optimal performance at the highest efficiency.”
Mellanox’s end-to-end InfiniBand connectivity, consisting of ConnectX®-2 I/O adapter products, cables, and the comprehensive IS5000 family of fixed and modular switches with fabric management software, delivers industry-leading performance, efficiency and economics for the best return on investment in performance interconnects. Mellanox provides its worldwide customers with the richest, most advanced and highest-performing end-to-end networking solutions for the world’s most compute-demanding applications.
- Learn more about Mellanox 40Gb/s InfiniBand adapters
- Learn more about Mellanox 40Gb/s InfiniBand switches
- Learn more about Mellanox fabric management software
- Learn more about Mellanox cables
- Follow Mellanox on Twitter and Facebook
Mellanox Technologies is a leading supplier of end-to-end connectivity solutions for servers and storage that optimize data center performance. Mellanox products deliver market-leading bandwidth, performance, scalability, power conservation and cost-effectiveness while converging multiple legacy network technologies into one future-proof solution. For the best in performance and scalability, Mellanox is the choice for Fortune 500 data centers and the world’s most powerful supercomputers. Founded in 1999, Mellanox Technologies is headquartered in Sunnyvale, California, and Yokneam, Israel. For more information, visit Mellanox at www.mellanox.com.
Mellanox, BridgeX, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, PhyX and Virtual Protocol Interconnect are registered trademarks of Mellanox Technologies, Ltd. CORE-Direct and FabricIT are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.