The need to analyze growing amounts of data, to support complex simulations, to overcome performance bottlenecks, and to create intelligent data algorithms requires the ability to manage and carry out computational operations on the data as it is being transferred by the data center interconnect. Mellanox InfiniBand solutions incorporate In-Network Computing technology that performs data algorithms within the network devices, delivering ten times higher performance and enabling the era of "data-centric" data centers. By delivering the fastest data speed, lowest latency, smart accelerations, and highest efficiency and resiliency, InfiniBand is the best choice to connect the world's top HPC and artificial intelligence supercomputers.
The Texas Advanced Computing Center

We've worked with Mellanox for 12 years, from our top-10 "Ranger" system in 2007 up to our latest machine, Frontera… frankly, InfiniBand is the best; it gives us the latency and bandwidth performance that we need.

Verne Global

We evaluated a number of different providers… so price, efficiency of service, but ultimately performance was the most important reason why we chose Mellanox's InfiniBand interconnect.


Join us for a tour around the International Supercomputing Conference (ISC) 2019 as we visit our partners' technology showcases, and find out why they chose Mellanox 200G HDR InfiniBand solutions for the best performance and best ROI.

Canadian Federation of Medical Students

The performance of the system gives us such a huge advantage and it allows us to deploy solutions much more quickly

Oak Ridge
National Laboratory

InfiniBand gives us the very high bandwidth we need to address some of our most important applications

Advanced Research Computing
University of Toronto

InfiniBand gives us a significant increase in leverage, which allows us to make use of the 60,000 cores of our system.

Climate Computing Centre

The key component of HPC computers today is the InfiniBand interconnect, so the main advantage of the solution provided by Mellanox is that of reliability and performance.

Supercomputer Centre

We chose a co-design approach, selected the appropriate hardware, and designed the system. This system was of course targeted at supporting our key applications in the best possible manner. The only interconnect that really could deliver that was InfiniBand.

Wyoming Supercomputing Center

The NCAR center is focused on atmospheric sciences. The supercomputer uses the InfiniBand interconnect in a full fat-tree topology, and it is very well utilized and efficient in part because of the interconnect.

The Information Technology Center
University of Tokyo

InfiniBand EDR is equipped with an offloading engine. The offloading engine is a meaningful function for achieving high performance in large-scale applications compared to other networks.

University of Birmingham

One of the big reasons we use InfiniBand and not an alternative is that we’ve got backwards compatibility with our existing solutions.

Shanghai Jiaotong University

InfiniBand is the most advanced high-performance interconnect technology in the world, with dramatic communication overhead reduction that fully unleashes cluster performance.

The Centre for High Performance Computing
South Africa

The heartbeat of the cluster is the interconnect. Everything is about how all these processes shake hands and do their work. InfiniBand and the interconnect is, in my opinion, what defines HPC.

Shanghai Supercomputer Center

InfiniBand is the most used interconnect in the HPC market. It has complete technical support and a rich software eco-system, including its comprehensive management tools. Additionally, it works seamlessly with all kinds of HPC applications, including both commercial applications and the open source codes that we are currently using.

San Diego Supercomputing Center

In HPC, the processor should be going 100% of the time on a science question, not on a communications question. This is why the offload capability of Mellanox's network is critical.

iCER, Institute for Cyber-Enabled Research
Michigan State University

We have users that move tens of terabytes of data, and this needs to happen very, very rapidly. InfiniBand is the way to do it.

Mellanox Technologies

When we hear our customers talk about why they chose Mellanox, we understand: they are the same reasons why we choose Mellanox every time, too.
