Testimonials

The need to analyze growing amounts of data, to support complex simulations, to overcome performance bottlenecks, and to create intelligent data algorithms requires the ability to manage and carry out computational operations on the data as it is being transferred by the data center interconnect. Mellanox InfiniBand solutions incorporate In-Network Computing technology that performs data algorithms within the network devices, delivering ten times higher performance and enabling the era of “data-centric” data centers. By delivering the fastest data speed, lowest latency, smart accelerations, and highest efficiency and resiliency, InfiniBand is the best choice to connect the world’s top HPC and artificial intelligence supercomputers.
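
As a concrete illustration of the In-Network Computing idea described above, the sketch below shows a plain MPI allreduce, the collective operation that SHARP-capable InfiniBand switches can execute inside the fabric. This is a minimal, generic example: the application code is the same whether or not the reduction is offloaded, and actually running it in the network depends on the MPI stack and cluster configuration, which is assumed here rather than shown.

```c
/* Minimal MPI allreduce: the collective pattern that SHARP-capable
 * InfiniBand switches can offload into the fabric.  The application
 * source is unchanged either way; offload is a property of the MPI
 * stack and fabric configuration, not of this code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes one partial result (its own rank here,
     * standing in for a locally computed value). */
    double local = (double)rank;
    double global = 0.0;

    /* Sum across all ranks; with in-network reduction enabled, the
     * reduction tree lives in the switches instead of consuming host
     * CPU cycles. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %g\n", size, global);

    MPI_Finalize();
    return 0;
}
```

The example compiles with mpicc and runs with mpirun in the usual way; the point is that the offload is transparent to the application.
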
CFMS
Centre for Modelling & Simulation
UK


The performance of the system gives us such a huge advantage, and it allows us to deploy solutions much more quickly.

Oak Ridge
National Laboratory
USA


InfiniBand gives us the very high bandwidth we need to address some of our most important applications.

SciNet
Advanced Research Computing
University of Toronto


InfiniBand gives us a significant increase in leverage, which allows us to make use of the 60,000 cores of our system.

DKRZ
Climate Computing Centre
Germany


The key component of HPC computers today is the InfiniBand interconnect, so the main advantage of the solution provided by Mellanox is its reliability and performance.

Jülich
Supercomputing Centre
Germany


We chose a co-design approach, selected the appropriate hardware, and designed the system. This system was of course targeted at supporting our key applications in the best possible manner. The only interconnect that really could deliver that was InfiniBand.

NCAR
Wyoming Supercomputing Center
USA


NCAR is a center focused on atmospheric sciences. The supercomputer uses the InfiniBand interconnect in a full fat-tree topology, and it is very well utilized and efficient, in part because of the interconnect.

The Information Technology Center
University of Tokyo
Japan


InfiniBand EDR is equipped with an offloading engine. Compared to other networks, the offloading engine is a meaningful capability for achieving high performance in large-scale applications.

University of Birmingham
England
 


One of the big reasons we use InfiniBand and not an alternative is that we’ve got backwards compatibility with our existing solutions.

Shanghai Jiaotong University
China
 


InfiniBand is the most advanced high-performance interconnect technology in the world, with dramatic communication overhead reduction that fully unleashes cluster performance.

CHPC
The Centre for High Performance Computing
South Africa


The heartbeat of the cluster is the interconnect. Everything is about how all these processes shake hands and do their work. InfiniBand, as the interconnect, is, in my opinion, what defines HPC.

Shanghai Supercomputer Center
China
 


InfiniBand is the most used interconnect in the HPC market. It has complete technical support and a rich software ecosystem, including comprehensive management tools. Additionally, it works seamlessly with all kinds of HPC applications, including both commercial applications and the open-source codes that we are currently using.

SDSC
San Diego Supercomputer Center
USA


In HPC, the processor should be going 100% of the time on a science question, not on a communications question. This is why the offload capability of Mellanox's network is critical.
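
The point about keeping the processor on the science rather than on communications is easiest to see with a non-blocking collective: the host posts the operation, keeps computing, and synchronizes only when the result is needed. The sketch below is a generic MPI-3 overlap pattern, not SDSC's code; the helper name and the amount of overlapped work are illustrative assumptions.

```c
/* Overlap pattern enabled by communication offload: post a
 * non-blocking allreduce, keep computing, then wait for the result. */
#include <mpi.h>
#include <stdio.h>

/* Stand-in for application work that does not depend on the
 * in-flight reduction (hypothetical helper for illustration). */
static double independent_work(int iterations)
{
    double acc = 0.0;
    for (int i = 0; i < iterations; i++)
        acc += 1.0 / (1.0 + i);
    return acc;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, global = 0.0;
    MPI_Request req;

    /* Post the reduction; an offload-capable NIC/switch can progress
     * it while the CPU stays on the science question. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    double side = independent_work(1000000);

    /* Synchronize only when the reduced value is actually needed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("overlapped work = %g, reduced sum = %g\n", side, global);

    MPI_Finalize();
    return 0;
}
```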

iCER, Institute for Cyber-Enabled Research
Michigan State University
USA


We have users that move tens of terabytes of data, and this needs to happen very, very rapidly. InfiniBand is the way to do it.
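
For context on what moving tens of terabytes rapidly turns on, the sketch below is a generic two-rank MPI bandwidth probe of the kind commonly used to check what an interconnect actually delivers. It is an illustration only, not iCER's tooling; the message size and repetition count are arbitrary assumptions.

```c
/* Simple two-rank bandwidth probe: rank 0 sends a large buffer to
 * rank 1 repeatedly and reports the achieved one-way rate. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (64 * 1024 * 1024)   /* 64 MiB per message */
#define REPS      50

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    char *buf = malloc(MSG_BYTES);
    double t0 = MPI_Wtime();

    for (int i = 0; i < REPS; i++) {
        if (rank == 0)
            MPI_Send(buf, MSG_BYTES, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, MSG_BYTES, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    double elapsed = MPI_Wtime() - t0;
    if (rank == 0)
        printf("~%.2f GB/s one-way (approximate)\n",
               (double)MSG_BYTES * REPS / elapsed / 1e9);

    free(buf);
    MPI_Finalize();
    return 0;
}
```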