The need to analyze growing amounts of data, to support complex simulations, to overcome performance bottlenecks, and to create intelligent data algorithms requires the ability to manage and carry out computational operations on the data as it is being transferred by the data center interconnect. Mellanox InfiniBand solutions incorporate In-Network Computing technology that performs data algorithms within the network devices, delivering ten times higher performance and enabling the era of “data-centric” data centers. By delivering the fastest data speed, lowest latency, smart accelerations, and the highest efficiency and resiliency, InfiniBand is the best choice for connecting the world’s top HPC and artificial intelligence supercomputers.
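The idea of performing data algorithms within the network devices can be pictured with a toy sketch. This is an illustration of the general principle only, not Mellanox’s implementation or API; the function names and the fan-in metric are hypothetical, chosen to show why reducing data in flight scales better than funneling everything to one host:

```python
# Toy model only: illustrates the idea behind in-network computing
# (switches reducing data as it flows through the fabric), not any
# real protocol or API. All names here are hypothetical.

def host_based_reduce(values):
    """Every host sends its value to a single root host, which does all
    the summing itself. Returns (result, fan-in at the busiest node)."""
    # The root must receive one message from every other host, so its
    # fan-in grows linearly with cluster size.
    return sum(values), len(values) - 1

def in_network_reduce(values, radix=2):
    """Switches along a tree each sum a small group of inputs and pass
    one partial result upward. Returns (result, fan-in at any switch)."""
    level = list(values)
    while len(level) > 1:
        # Each switch combines at most `radix` inputs into one output,
        # so no single node ever handles more than `radix` messages.
        level = [sum(level[i:i + radix]) for i in range(0, len(level), radix)]
    return level[0], radix
```

With 16 hosts, the host-based root absorbs 15 incoming messages while each in-network switch handles only 2; both produce the same result. Shrinking that per-node fan-in is one reason offloading reductions into the fabric improves performance at scale.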
Canadian Federation of Medical Students

The performance of the system gives us such a huge advantage, and it allows us to deploy solutions much more quickly

Oak Ridge
National Laboratory

InfiniBand gives us the very high bandwidth we need to address some of our most important applications

Advanced Research Computing
University of Toronto

InfiniBand gives us a significant increase in leverage, which allows us to make use of the 60,000 cores of our system

Climate Computing Centre

The key component of HPC computers today is the InfiniBand interconnect, so the main advantage of the solution provided by Mellanox is that of reliability and performance

Supercomputer Centre

We chose a co-design approach, selected the appropriate hardware, and designed the system. This system was, of course, targeted at supporting our key applications in the best possible manner. The only interconnect that really could deliver that was InfiniBand.

Wyoming Supercomputing Center

The NCAR center is focused on atmospheric sciences. The supercomputer uses the InfiniBand interconnect in a full fat-tree topology, and it is very well utilized and efficient, in part because of the interconnect.

The Information Technology Center
University of Tokyo

InfiniBand EDR is equipped with an offloading engine. Compared to other networks, the offloading engine is a meaningful capability for achieving high performance in large-scale applications.

University of Birmingham

One of the big reasons we use InfiniBand and not an alternative is that we’ve got backwards compatibility with our existing solutions.

Shanghai Jiaotong University

InfiniBand is the most advanced high-performance interconnect technology in the world, with dramatic communication overhead reduction that fully unleashes cluster performance.

The Centre for High Performance Computing
South Africa

The heartbeat of the cluster is the interconnect. Everything is about how all these processes shake hands and do their work. InfiniBand and the interconnect is, in my opinion, what defines HPC.

Shanghai Supercomputer Center

InfiniBand is the most used interconnect in the HPC market. It has complete technical support and a rich software eco-system, including its comprehensive management tools. Additionally, it works seamlessly with all kinds of HPC applications, including both commercial applications and the open source codes that we are currently using.

San Diego Supercomputing Center

In HPC, the processor should be going 100% of the time on a science question, not on a communications question. This is why the offload capability of Mellanox’s network is critical.

iCER, Institute for Cyber-Enabled Research
Michigan State University

We have users that move tens of terabytes of data, and this needs to happen very, very rapidly. InfiniBand is the way to do it.

Mellanox Technologies

When we hear our customers talk about why they chose Mellanox, we understand: they are the same reasons why we choose Mellanox every time, too.