It is that time of the year at Mellanox when we proudly present some of the coolest things our team has been working on! This time it will be at the Open Compute Project (OCP) Summit, held in the heart of Silicon Valley at the San Jose Convention Center on March 11-12, 2015. It is impressive to see how hyper-scale architecture has been revolutionized in just four years.
What started as a small project in the basement of Facebook's Palo Alto office has come alive in the form of cutting-edge innovation in racks, servers, networking and storage. Some of these innovations from Mellanox will take center stage at the OCP Summit, accelerating the advancement of data center components, mainly servers and networking. Key highlights of the OCP event are:
ConnectX-4 and Multi-Host: Back in November, Mellanox announced the industry’s first 100GbE interconnect adapter, pushing innovation in networking across HPC, cloud, Web 2.0, storage and enterprise applications. With a throughput of 100 Gb/s, bidirectional throughput of 195 Gb/s, application latency of 610 nanoseconds and a message rate of 149.5 million messages per second, ConnectX-4 InfiniBand adapters provide the means to increase data center return on investment while reducing IT costs.
You realize that it is time to get a new car. You go to the local dealer and look around. You spot the car you want, and it meets your budget. You take a look at the list of features you wrote down at home, and you check each and every one of them.
A salesperson approaches you and asks: “How may I help you?”
You say: “I like this one, is it possible to take it for a test drive?”
The salesperson says: “Of course, may I see your driver’s license?” You hand him your license.
He takes a look and says: “I’m sorry, but you will need to take additional driving lessons to drive this model.”
“What? I don’t mind reading the owner’s manual, but why driving lessons? This is a car, not a school bus!”
Obviously, each car make and model is different, but they share so much functionality that you can switch from one make and model to another almost instantly. If that weren’t the case, we would all be forced to buy the same model again and again.
So why not in Ethernet switching?
The Ethernet switch industry faces a similar conflict. At the heart of almost every Ethernet switch you will find a switching ASIC, and while every ASIC is different, they share a lot of functionality. So yes, you should read the manual to operate them correctly, but why do you need driving lessons to use an ASIC from another manufacturer?
Last year, the Open Compute Project (OCP) launched a new network project focused on developing operating-system-agnostic switches to address the need for a highly efficient and cost-effective open switch platform. Mellanox Technologies collaborated with Cumulus Networks and the OCP community to define unified and open drivers for the OCP switch hardware platforms. As a result, any software provider can now deliver a networking operating system to the open switch specifications on top of the Open Network Install Environment (ONIE) boot loader.
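To make the ONIE model concrete, here is a minimal sketch of the contract a network OS installer follows: ONIE discovers an installer image (for example over DHCP/HTTP, searching default names such as `onie-installer-x86_64`) and executes it as a shell script. The function and device path below are illustrative placeholders, not part of any real installer.

```shell
#!/bin/sh
# Hedged sketch of an ONIE-style NOS installer. A real installer would
# partition the target mass-storage device, copy the NOS image onto it,
# and register a boot entry; this placeholder only shows the contract:
# print progress and return 0 so ONIE marks the install as complete.

demo_nos_install() {
    target_disk="$1"                       # e.g. /dev/sda (illustrative)
    echo "Installing demo NOS image to $target_disk..."
    # ... partitioning and image copy would happen here ...
    echo "Install complete."
    return 0
}

demo_nos_install /dev/sda
```

On success, ONIE reboots the switch into the freshly installed network OS; on failure it stays in its discovery loop so another installer can be tried.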
At the upcoming OCP Summit, Mellanox will present recent technical advances such as loading Net-OS on an x86 system with ONIE, OCP platform control using Linux sysfs calls, and a full L2 and L3 Open Ethernet Switch API, and will also demonstrate the Open SwitchX SDK. To support this, Mellanox developed the SX1024-OCP, a SwitchX®-2-based top-of-rack (TOR) switch that supports 48 10GbE SFP+ ports and up to 12 40GbE QSFP ports.
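The sysfs-based control model mentioned above can be sketched as follows. Platform attributes are exposed as ordinary files that are read and written like any other file; the helper names here are made up for illustration, and the concrete OCP platform attribute paths are platform-specific, so the example queries a standard Linux network-device attribute instead.

```shell
#!/bin/sh
# Hedged sketch of sysfs-style platform control. read_attr/write_attr are
# illustrative helpers, not a Mellanox API; real OCP platform attributes
# live under platform-specific paths in /sys.

SYSFS_ROOT=${SYSFS_ROOT:-/sys}          # overridable root, useful for testing

read_attr() {                           # usage: read_attr class/net/lo/mtu
    cat "$SYSFS_ROOT/$1"
}

write_attr() {                          # usage: write_attr <attr-path> <value>
    echo "$2" > "$SYSFS_ROOT/$1"
}

# Example: query the MTU of the loopback device, a standard sysfs
# attribute present on any Linux system.
read_attr class/net/lo/mtu
```

The appeal of this model is that any language or tool that can open a file can control the platform, with no vendor SDK required for basic operations.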
The SX1024-OCP enables non-blocking connectivity within the OCP Open Rack and 1.92Tb/s of throughput. Alternatively, it can provide 60 10GbE server ports when using QSFP+ to SFP+ breakout cables, increasing rack efficiency for less bandwidth-demanding applications.
Mellanox also introduced the SX1036-OCP, a SwitchX-2-based spine switch that supports 36 40GbE QSFP ports. The SX1036-OCP enables non-blocking connectivity between racks. These open-source switches are the first on the market to support ONIE on x86 dual-core processors.