Mellanox 2016: The Year In Review, Part II

 
ConnectX-6, InfiniBand, LinkX, Switches

‘Fantastical’ Networking Achievements and Where To Find Them Part II

In Part I of this look back on 2016, we covered the amazing accomplishments in Enterprise Data Centers, LinkX®, SoC, Cloud and more. Now, in Part II, we will delve into the many accomplishments in HPC, AI, and InfiniBand.

2016 was the year that Mellanox announced 200Gb/s HDR InfiniBand solutions with record levels of performance and scalability. As the world’s first 200Gb/s data center interconnect, Mellanox ConnectX®-6 adapters, Quantum switches, and LinkX® cables and transceivers together provide a complete 200Gb/s HDR InfiniBand interconnect infrastructure for high performance computing, machine learning, big data, cloud, Web 2.0, and storage platforms that will power the next generation of data centers.

It was also a year of awards and recognition. At the Supercomputing Conference, Mellanox received six HPCwire Readers’ and Editors’ Choice Awards spanning a variety of categories, acknowledging the company’s achievements in delivering high-performance interconnect technology that enables the highest-performing, most efficient compute and storage platforms. Mellanox Vice President of Marketing, Gilad Shainer, received an award for outstanding leadership in HPC, recognizing his individual contributions to the community over the course of his career, including eight years of service as Chairman of the HPC Advisory Council and his role in the development of the Co-Design architecture.

As we continue to push the boundaries of technology, Mellanox announced a new line of InfiniBand router systems, expanding data center scalability and enabling infrastructure flexibility. The EDR 100Gb/s InfiniBand router systems deliver superior performance for HPC, cloud, Web 2.0, and enterprise applications. They enable a new level of scalability critical for the next generation of mega data-center deployments, as well as expanded capabilities for data center isolation between different users and applications. Some great coverage can be found here: LINK

And not to be outdone by breakthrough products and multiple awards, Mellanox solutions also accelerated the world’s fastest supercomputer at the National Supercomputing Center in Wuxi, China. The new number-one supercomputer delivers 93 petaflops, roughly three times the performance of the previous top system, connecting nearly 41,000 nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is key to providing world-leading performance, scalability, and efficiency, connecting the highest number of nodes and CPU cores within a single supercomputer.

Other milestones included Mellanox and the Pacific Northwest National Laboratory announcing a joint collaboration to design an exascale system, the University of Tokyo selecting Mellanox EDR InfiniBand to accelerate its newest supercomputer, and the announcement of ConnectX-5, the next generation of 100Gb/s InfiniBand and Ethernet smart interconnect adapters.

In other technology arenas, Mellanox continued to drive machine learning and artificial intelligence to new heights. This is an area where both our Ethernet and InfiniBand solutions outshine the competition. For example, Mellanox is the sole InfiniBand interconnect provider for NVIDIA’s deep learning solution, and our 50GbE is the only solution driving Facebook’s OCP-based Big Sur platform: https://www.youtube.com/watch?v=1ayGJDO6PKU. We also received a Technology Leadership award from Baidu for helping drive that company to new levels of machine learning. We joined forces with JD.com to drive e-commerce artificial intelligence. And we helped move the needle on artificial intelligence speech recognition with our 25G/100G Ethernet solutions at iFLYTEK, one of China’s leading intelligent speech and language technology companies. By choosing Mellanox’s end-to-end 25G and 100G Ethernet solutions, based on ConnectX adapters and Spectrum™ switches, for its next-generation machine learning center, iFLYTEK can now achieve a speech recognition rate of 97 percent.

HPC and AI were not the only markets where Mellanox dominated in 2016. In June, the NVM Express organization released version 1.0 of the NVM Express over Fabrics (NVMe-oF, also known as NVMf) standard. NVMe-oF allows the new high-performance SSD interface, Non-Volatile Memory Express (NVMe), to be accessed across RDMA-capable networks such as InfiniBand and RoCE (RDMA over Converged Ethernet). It is the first networked storage technology in over 20 years to be built from the ground up. Coupled with new Ethernet and InfiniBand speeds of 100Gb/s, NVMe-oF will not only dramatically improve the performance of existing storage network applications, but will also accelerate the adoption of many new and future computing technologies such as scale-out and software-defined storage, hyperconverged infrastructure, and compute/storage disaggregation. Just a few short months later, Huawei, a leading global information and communications technology (ICT) solutions provider, and Mellanox previewed the industry’s first resource pooling solution based on NVM Express® over Fabrics (NVMe-oF™) at HUAWEI CONNECT 2016. According to test results, the solution delivers higher storage bandwidth and performance, and boosts random read/write performance by at least 2,000 times compared with a traditional SAS solution. Now, that’s progress!
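For readers curious what NVMe-oF looks like in practice on the host side, here is a minimal sketch of one common workflow: discovering and connecting to a remote NVMe subsystem over an RDMA fabric using the standard nvme-cli utility, driven from Python. The target address, port, and subsystem NQN below are hypothetical placeholders, and the exact steps will vary by deployment.

# Minimal sketch: attach a remote NVMe-oF subsystem over RDMA via nvme-cli.
# The address, port, and NQN are hypothetical placeholders; requires root
# and the nvme-cli package with the RDMA transport modules loaded.
import subprocess

TARGET_ADDR = "192.168.100.10"                      # hypothetical RDMA-capable target IP
TARGET_PORT = "4420"                                # conventional NVMe-oF RDMA service port
SUBSYS_NQN = "nqn.2016-06.io.example:nvme-pool-0"   # hypothetical subsystem NQN

def run(cmd):
    """Run an nvme-cli command and return its standard output."""
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout

# Discover the subsystems the target exports over the RDMA transport.
print(run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT]))

# Connect to one subsystem; it then appears as a local /dev/nvmeXnY block device.
run(["nvme", "connect", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", SUBSYS_NQN])

Once connected, the remote namespace shows up as an ordinary local NVMe block device, which is what allows NVMe-oF to slot underneath existing applications and software-defined storage stacks with minimal change.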

Without telling any tales out of school, we can still let everyone know that 2017 at Mellanox promises to bring even more amazing breakthrough technologies and solutions, so stay tuned and continue to check out our Mellanox blogs.

About Julie M. DiBene

Julieanne DiBene is the Senior Director of Marketing Communications at Mellanox. Prior to this, she was the Director of Marketing Communications with Micrel Inc., a semiconductor manufacturer. She holds a Bachelor of Arts degree in Journalism from San Jose State University.
