Category Archives: InfiniBand

Round ’Em Up! OpenStack Austin 2016 Had Something For Everybody

If you’re as lucky as we are, you had the opportunity to attend OpenStack Summit 2016 in Austin, Texas this week. The event, which drew 7,500 attendees, now sits at the crux of the convergence of HPC, scientific computing, and the cloud. We saw the most significant OpenStack traction in several market segments: academic/research for scientific computing, Telco/NFV, and cloud service providers and large enterprises for PaaS cloud and traditional enterprise deployments. Mellanox is a leader in each of these areas and brings advanced technologies and expertise to help you get the most out of your OpenStack deployments. With our heritage in high-performance networking, as well as InfiniBand and Ethernet solutions, Mellanox remains at the center of this convergence. This was evident in the crowd standing ten deep at times in front of the Mellanox booth. We were constantly swamped with the curious, solution seekers, technology lovers, and old friends.


To further cement our position in the space and celebrate all things OpenStack, we took several giant leaps forward this week, including two major OpenStack announcements and several sessions at the Summit.

Partnering with the University of Cambridge
Due to unprecedented volumes of data and the need to provide quick and secure access to computing and storage resources, there is a transformation taking place in the way Research Computing Services are delivered. This is why the University of Cambridge (UoC) selected our end-to-end Ethernet interconnect solution, including Spectrum SN2700 Ethernet switches, ConnectX-4 Lx NICs, and LinkX cables, for its OpenStack-based scientific research cloud. This expands our existing footprint of InfiniBand solutions and empowers the UoC to develop an architecture that will lay a foundation for Research Computing.

Powering Europe’s First Multi-Regional OpenStack-Based Public Cloud
One of our customers, Enter, has been building out their OpenStack cloud and adopting open software. In fact, Enter Cloud Suite is Europe’s first multi-regional OpenStack-based cloud, and we’re thrilled to announce that Enter selected our Open Composable Networks as the Ethernet network fabric for its Infrastructure-as-a-Service cloud offering.

Bringing OpenStack Hot Topics to Life
We found that OpenStack storage was a trending topic at the show and are proud to provide great options for this in the form of Ceph, Cinder, and Nexenta. No matter which option you choose, Mellanox interconnects deliver the best OpenStack storage integration, performance, and efficiency.

Another area where we saw traction was Ethernet switch solutions. The industry is currently experiencing strong demand for integrating converged/hyperconverged systems with the network (NICs and switches). NEO is perfectly positioned for this challenge, making the network transparent through an enhanced REST API and plugins for OpenStack and other management platforms.

An OpenStack-based cloud, like any other cloud, needs a fair switch, and Mellanox Spectrum is well positioned for the task, with customers realizing the enduring value of Spectrum.

Finally, Mellanox gave three highly successful talks at the show. If you missed any of them, you can view them here:
Mellanox Open Composable Networks
OpenStack Lightning Talk
Chasing the Rainbow: NCI’s Pursuit of High Performance OpenStack Cloud

A Look At The Latest Omni-Path Claims

Once again, the temperature kicked up another few degrees in the interconnect debate with HPC Wire’s coverage of information released by Intel on the growth of Omni-Path Architecture (OPA). According to Intel, the company behind OPA, the technology has been seeing steady market traction. We have always expected Intel to win some systems, just as QLogic did in the past or Myricom years before that; however, as I read over the article in detail, I couldn’t help but take issue with some of its points.

On Market Traction

Intel has seen continued delays in Omni-Path’s production release. We are not aware of any OPA offering that can be purchased through the channel, and OEMs have not released anything.

In the article, a number of public wins are referenced, including the National Nuclear Security Administration’s Tri-Labs Capacity Technology Systems (CTS-1) program and the Pittsburgh Supercomputing Center. The latter was built with non-production parts because the center could not delay any further, and we have heard from sources that performance is lacking.

The specific Department of Energy deal with NNSA is part of the commodity track of the DoE labs, which is a set of small systems used for commodity work. It is not the DoE leadership systems, and we know that Lawrence Livermore National Laboratory decided to use InfiniBand for its next leadership system under the CORAL program. The DoE did grant the previous commodity deal to QLogic TrueScale a few years ago, and QLogic made the same noise then that we are hearing today: that they were allegedly gaining momentum over Mellanox.

Additionally, the CTS program (formerly TLCC) enables a second tier of companies and helps the labs maintain multiple technology choices. The program results in a number of small-scale systems that the labs use for basic work, not for their major, high-scale applications. The previous TLCC was awarded to Appro and QLogic, and the current one to Penguin Computing and Intel OPA.

On A Hybrid Approach

Omni-Path is based on the same technology as the old “InfiniPath” from PathScale, which was later acquired and marketed by QLogic under the name “TrueScale.” As with QLogic’s TrueScale, we believe any description of Omni-Path as a “hybrid” between off-loading and on-loading is likely not supported by the facts. Read more about it in my latest post for HPC Wire. You can see the system performance difference in various HPC application cases, such as WIEN2K, Quantum Espresso, and LS-DYNA.

On Performance

Intel chose to highlight message rate performance, stating “Compute data coming out of MPI tends to be very high message rate, relatively small size for each message, and highly latency sensitive. There we do use an on-load method because we found it to be the best way to move data. We keep in memory all of the addressing information for every node, core, and process running that requires this communications.” While previously Intel claimed 160M messages per second with OPA, they recently admitted it is closer to 79-100M. Mellanox delivers a superior solution with 150M messages per second.

Finally, as of today, Intel has not yet provided application performance benchmarks for OPA that support the details of the article or offer substance to its claims regarding performance versus Mellanox InfiniBand. We have a number of case studies that prove the performance of InfiniBand.

We look forward to seeing what Intel comes out with next.

OpenStack Summit Austin 2016

The OpenStack Summit is a five-day conference for developers, users, and administrators of OpenStack cloud software. Held every six months, the conference schedule rotates based on the OpenStack software release cycle.  This week, the summit is being held in Austin, Texas at the Austin Convention Center.


The summit started yesterday and we had two successful sessions:

Open Composable Networks: Leverage LEGO Design to Transform Cloud Networking by Kevin Deierling, Mellanox VP Marketing

Kevin talked about a new approach to cloud networking that stemmed from the hyper-scale web services giants but is being made widely available by Mellanox and our cloud ecosystem partners. He shared real-world deployments from our OpenStack customers such as Cambridge, Enter, and NCI, and described the LEGO parts they used, including Mellanox NEO and our end-to-end 25/50/100G Ethernet and InfiniBand intelligent interconnects.

Lightning Talk by Moshe Levi, SW Cloud Manager, about Using a Device Emulator to Enhance CI

Moshe talked about Mellanox SimX and explained how to reduce the number of physical servers and eliminate the physical device dependency in CI.

We invite you to visit Mellanox’s booth (D20) to see the 25/50/100G cloud solution based on Spectrum, ConnectX-4, and Mellanox NEO for network automation. Make sure to stop by and talk with us! Here are some photos from yesterday’s sessions along with the Mellanox booth.

[Photos: session 1, session 2, Mellanox booth]

One Step Closer to Exascale Computing: Switch-IB 2 & SHArP Technology

A typical metric used to evaluate network performance is its latency for point-to-point communications. But more important, and sometimes overlooked, is the latency of collective communications, such as barrier synchronization, used to synchronize a set of processes, and all-reduce, used to perform distributed reductions. For many high-performance computing applications, the performance of such collective operations plays a critical role in determining overall application scalability and performance. As such, a system-oriented approach to network design is essential for achieving the network performance needed to reach extreme system scales.

 

The CORE-Direct technology introduced by Mellanox was a first step toward taking a holistic system view, implementing the execution of collective communications in the network. The SHArP technology being introduced is an extension of this approach: it moves support for collective communication from the network edges, i.e., the hosts, to the core of the network, the switch fabric. Processing of collective communications moves to dedicated silicon within the InfiniBand Switch-IB 2 switch, providing the means to accelerate these collective operations by an order of magnitude.
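
To make these collective operations concrete, here is a minimal MPI sketch in C showing the two collectives discussed above, a barrier and an all-reduce. SHArP itself is transparent to the application: the offload is enabled through the MPI library and the fabric, so the code below illustrates the programming model rather than any SHArP-specific API.

/* Minimal MPI example of the collectives discussed above: a barrier to
 * synchronize all ranks and an all-reduce to compute a global sum.
 * On a SHArP-enabled fabric with a supporting MPI library, these same
 * calls can be offloaded to the switches without changing the code.
 * Build: mpicc collectives.c -o collectives
 * Run:   mpirun -np 4 ./collectives
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local, global_sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Barrier: every rank waits here until all ranks have arrived. */
    MPI_Barrier(MPI_COMM_WORLD);

    /* All-reduce: sum each rank's value and deliver the result to all ranks. */
    local = (double)rank;
    MPI_Allreduce(&local, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %.0f\n", size - 1, global_sum);

    MPI_Finalize();
    return 0;
}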


Dell releases FDR InfiniBand Switches from Mellanox

Today, Dell announced the release of Mellanox’s end-to-end FDR 56Gb/s InfiniBand Top of Rack (ToR) solutions. Three switches will be available: the Mellanox SX6012 (12-port), SX6025 (36-port, unmanaged), and SX6036 (36-port, managed). This was highlighted even further by Dell making Mellanox EDR 100Gb/s end-to-end InfiniBand switches, adapters, and cables available through Dell S&P. Customers now have an unmatched interconnect available for the Dell PowerEdge Server Family that together will deliver unparalleled performance and scalability for high-performance and data-intensive applications.

 Mellanox InfiniBand FDR Switches

Mellanox FDR/EDR 56/100Gb/s InfiniBand adapters, switches, cables, and software are the most efficient solutions for server and storage connectivity, delivering high throughput, low latency, and industry-leading application performance for enterprise solutions that are widely deployed across HPC, Cloud, Web 2.0, and Big Data, satisfying the most demanding data center requirements.

 

The Dell PowerEdge Server Family has reinvented enterprise data center solutions and data analytics by changing the equation of performance, space, power, and economics, and as a result delivers breakthrough performance at record-setting efficiencies.

 

Together we are enabling customers to build highly efficient and scalable cluster solutions at a lower cost, with less complexity, in less rack space. To help customers realize the advantages of Dell server and storage platform designs combined with Mellanox high-performance interconnect, we are investing in upgrading the Dell Solution Centers in the US, EMEA, and APJ with end-to-end InfiniBand technology, enabling performance benchmarking and application-level testing with the latest HPC technologies.

 


Mellanox and HP Collaborate Together with EDR 100Gb/s InfiniBand

Today at ISC’15, we announced the growing industry-wide adoption of our end-to-end EDR 100Gb/s InfiniBand solutions. This was highlighted even further by HP announcing end-to-end EDR 100Gb/s InfiniBand enablement plans across its Apollo Server Family, which together deliver unparalleled performance and scalability for HPC and Big Data workloads.

 

We are thrilled to have HP’s HPC and Big Data team as a key technology partner enabling these verticals. The collaboration allows our companies to deliver optimized compute platforms with the most efficient, high-performance interconnect available on the market today.

 

Mellanox EDR 100Gb/s InfiniBand adapters, switches, cables, and software are the most efficient solutions for server and storage connectivity, delivering high throughput, low latency, and industry-leading application performance for both HPC and Big Data applications.

 

The HP Apollo Server Family has reinvented high-performance computing and Big Data analytics by changing the equation of performance, space, and power, and as a result delivers breakthrough performance at record-setting efficiencies.

 

Together we are enabling customers to build highly efficient and scalable cluster solutions at a lower cost, with simplicity, in less rack space. To help customers realize the advantages of HP’s best-in-class server designs combined with Mellanox high-performance interconnect, we are investing in upgrading the HP competency and benchmarking centers in the US, EMEA, and APJ with end-to-end EDR 100Gb/s InfiniBand technology and HP’s latest ProLiant Gen9 servers, enabling performance benchmarking and application-level testing with the latest HPC technologies.

 

How to Achieve Higher Efficiency in Software Defined Networks (SDN) Deployments

During the last couple of years, the networking industry has invested a great deal of effort in developing Software Defined Networking (SDN) technology, which is drastically changing data center architecture and enabling large-scale clouds without significantly escalating TCO (Total Cost of Ownership).

 

The secret of SDN is not that it enables control of data center traffic via software (it’s not as if IT managers were using screwdrivers to manage the network before), but rather that it affords the ability to decouple the control path from the data path. This represents a major shift from traditional data center networking architecture and therefore offers agility and better economics in modern deployments.

 

For readers who are not familiar with SDN, a simple example demonstrates the efficiency it provides: imagine a traffic light that makes its own decisions about when to change and sends data to the other lights. Now imagine that replaced with a centralized control system that takes a global view of the entire traffic pattern throughout the city and therefore makes smarter decisions about how to route traffic.

 

The centralized control unit tells each of the lights what to do (using a standard protocol), reducing the complexity of the local units while increasing overall agility. For example, in an emergency, the system can reroute traffic and allow rescue vehicles faster access to the source of the issue.
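
To make the control-path/data-path split concrete, here is a deliberately simplified toy model in C. It is not OpenFlow or any real controller API, and every structure and function name is invented for illustration: a “controller” with a global view installs forwarding rules, while a “switch” only performs fast table lookups.

/* Toy model of SDN control/data path decoupling. The controller (control
 * path) computes forwarding rules from a global view; switches (data path)
 * only look rules up. Purely illustrative; unrelated to any real SDN API.
 */
#include <stdio.h>

#define MAX_RULES 16

struct flow_rule {            /* match on destination, act with an output port */
    int dst_host;
    int out_port;
};

struct sw {                   /* a data-path element: just a rule table */
    int num_rules;
    struct flow_rule rules[MAX_RULES];
};

/* Control path: the controller installs a rule into a switch's table. */
static void controller_install(struct sw *s, int dst_host, int out_port)
{
    if (s->num_rules < MAX_RULES) {
        s->rules[s->num_rules].dst_host = dst_host;
        s->rules[s->num_rules].out_port = out_port;
        s->num_rules++;
    }
}

/* Data path: the switch forwards packets with a simple table lookup. */
static int switch_forward(const struct sw *s, int dst_host)
{
    for (int i = 0; i < s->num_rules; i++)
        if (s->rules[i].dst_host == dst_host)
            return s->rules[i].out_port;
    return -1; /* miss: in a real SDN this would be referred to the controller */
}

int main(void)
{
    struct sw edge_switch = { 0 };

    /* Controller decisions, e.g. steering host 2 around a congested link. */
    controller_install(&edge_switch, 1, 10);
    controller_install(&edge_switch, 2, 12);

    printf("packet to host 1 -> port %d\n", switch_forward(&edge_switch, 1));
    printf("packet to host 2 -> port %d\n", switch_forward(&edge_switch, 2));
    return 0;
}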

 

Tokyo Traffic Control Center; photo courtesy of @CScoutJapan


CROC’s Public Cloud Goes at InfiniBand Speed

CROC is the number one IT infrastructure creation company in Russia and one of Russia’s top 200 private companies. CROC has become the first public cloud service provider in Russia to adopt InfiniBand, a standard for high-speed data transfer between servers and storage. Migration to a new network infrastructure took approximately one month and resulted in up to a ten-fold increase in cloud service performance.



Introduction to InfiniBand

InfiniBand is a network communications protocol that offers a switch-based fabric of point-to-point bi-directional serial links between processor nodes, as well as between processor nodes and input/output nodes, such as disks or storage. Every link has exactly one device connected to each end of the link, such that the characteristics controlling the transmission (sending and receiving) at each end are well defined and controlled.

 

InfiniBand creates a private, protected channel directly between the nodes via switches, and facilitates data and message movement without CPU involvement with Remote Direct Memory Access (RDMA) and Send/Receive offloads that are managed and performed by InfiniBand adapters. The adapters are connected on one end to the CPU over a PCI Express interface and to the InfiniBand subnet through InfiniBand network ports on the other. This provides distinct advantages over other network communications protocols, including higher bandwidth, lower latency, and enhanced scalability.

Figure 1: Basic InfiniBand Structure
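
For readers who want to see how an application discovers the fabric, here is a minimal sketch using the standard libibverbs API from rdma-core. It only enumerates the InfiniBand adapters on a host and queries port 1 on each; a full RDMA data-path program (queue pairs, memory registration, Send/Receive or RDMA operations) is beyond the scope of this introduction.

/* Minimal libibverbs sketch: list InfiniBand devices and query port 1.
 * Build (with rdma-core installed): gcc ib_list.c -o ib_list -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* state and active_mtu are enums; print their numeric values */
            printf("%s: port 1 state=%d, active MTU enum=%d\n",
                   ibv_get_device_name(devs[i]),
                   port.state, port.active_mtu);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}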


Accelerating Genomic Analysis

One of the biggest catchphrases in modern science is Human Genome–the DNA coding that largely pre-determines who we are and many of our medical outcomes. By mapping and analyzing the structure of the human genetic code, scientists and doctors have already started to identify the causes of many diseases and to pinpoint effective treatments based on the specific genetic sequence of a given patient. With the advanced data that such analysis provides, doctors can offer more targeted strategies for potentially terminal patients at times when no other clinically relevant treatment options exist.
