Silicon Photonics: Using the Right Technology for Data Center Applications

At DesignCon last week, I followed a speaker who ended his presentation with a quote from Mark Twain: “The reports of my death have been greatly exaggerated!” The speaker was talking about copper cabling on a panel entitled “Optical Systems Technologies and Integration.” He showed some nice charts on high-speed signaling over copper, making the point that copper will be able to scale to speeds of 100 Gb/s.

 

As the next speaker on the panel, I assured him that those of us who come from optical communications are not “trying to kill copper.” Rather, the challenge for companies like Mellanox, an end-to-end interconnect solutions company for InfiniBand and Ethernet applications, is to provide the “right technology for the application.” I spoke about the constraints of 100 Gb/s pipes and our solutions.

Continue reading

Mellanox Technologies Delivers the World’s First 40GbE NIC for OCP Servers

Last year, the Open Compute Project (OCP) launched a new network project focused on developing operating-system-agnostic switches to address the need for a highly efficient and cost-effective open switch platform. Mellanox Technologies collaborated with Cumulus Networks and the OCP community to define unified and open drivers for the OCP switch hardware platforms. As a result, any software provider can now deliver a networking operating system for the open switch specifications on top of the Open Network Install Environment (ONIE) boot loader.

At the upcoming OCP Summit, Mellanox will present recent technical advances, such as loading Net-OS on an x86 system with ONIE, OCP platform control using Linux sysfs calls, and a full L2 and L3 Open Ethernet Switch API, and will also demonstrate the Open SwitchX SDK. To support this, Mellanox developed the SX1024-OCP, a SwitchX®-2-based TOR switch that supports 48 10GbE SFP+ ports and up to 12 40GbE QSFP ports.
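
As a flavor of what sysfs-based platform control looks like, here is a minimal Python sketch that reads fan and temperature sensors through the generic Linux hwmon sysfs interface; the exact attributes exposed depend on the platform driver, so treat the paths below as illustrative rather than as the OCP switch API.

```python
# Illustrative sketch of sysfs-based platform monitoring on Linux.
# Attribute names (fan*_input, temp*_input) follow the generic hwmon
# convention; which ones actually exist depends on the platform driver.
import glob
import os

def read_hwmon_sensors():
    readings = {}
    for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
        name_path = os.path.join(hwmon, "name")
        name = open(name_path).read().strip() if os.path.exists(name_path) else hwmon
        sensors = glob.glob(os.path.join(hwmon, "fan*_input")) + \
                  glob.glob(os.path.join(hwmon, "temp*_input"))
        for attr in sensors:
            with open(attr) as f:
                readings[f"{name}/{os.path.basename(attr)}"] = int(f.read().strip())
    return readings

if __name__ == "__main__":
    # fan*_input is reported in RPM, temp*_input in millidegrees Celsius.
    for key, value in sorted(read_hwmon_sensors().items()):
        print(key, value)
```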

The SX1024-OCP enables non-blocking connectivity within the OCP’s Open Rack and 1.92 Tb/s of throughput. Alternatively, it can provide 60 10GbE server ports when using QSFP+ to SFP+ breakout cables, increasing rack efficiency for less bandwidth-demanding applications.

Mellanox also introduced the SX1036-OCP, a SwitchX-2-based spine switch that supports 36 40GbE QSFP ports. The SX1036-OCP enables non-blocking connectivity between racks. These open-source switches are the first on the market to support ONIE on x86 dual-core processors.

Continue reading

Mellanox Boosts SDN and Open Source with New Switch Software

Authored by:  Amir Sheffer, Sr. Product Manager

This week we reinforced our commitment to Open Ethernet, open source, and Software-Defined Networking (SDN). With the latest software package for our Ethernet switches, Mellanox has added support for two widely used tools, OpenFlow and Puppet, among other important features.

The introduction of this new functionality allows users to move toward more SDN and automation in their data centers. Compared to custom CLI scripts, OpenFlow and Puppet enable customers to control and monitor switches in a unified, centralized manner, simplifying overall network management and reducing time and cost. Forwarding rules, policies, and configurations can be set once and then applied automatically to many switches across the network.

 


Flexible OpenFlow Support

Mellanox Ethernet switches can now operate in OpenFlow hybrid switch mode, and expose both an OpenFlow forwarding pipeline and a locally-managed switching and routing pipeline. The OpenFlow forwarding pipeline utilizes thousands of processing rules (or flows), the highest number in the industry.

Switches interface with an OpenFlow controller through an integrated OpenFlow agent that allows direct access to the SwitchX®-2-based switch forwarding and routing planes. The hybrid switch model provides the most robust, easy-to-use, and efficient implementation: the switch forwards a packet according to the OpenFlow configuration when a matching flow is found, or otherwise handles it in its local forwarding/routing pipeline according to the locally managed switch control applications.
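
To make the hybrid model concrete, here is a minimal sketch of pushing a single flow rule to several switches through a controller’s northbound REST interface. The controller URL and JSON schema below are hypothetical placeholders, not a specific Mellanox or controller API; real controllers each define their own schema.

```python
# Hypothetical sketch: install one OpenFlow rule on several switches via
# an OpenFlow controller's northbound REST API. The endpoint URL and the
# JSON fields are illustrative placeholders only.
import json
import urllib.request

CONTROLLER = "http://controller.example.com:8080/flows"  # hypothetical endpoint

def push_flow(switch_dpid, match_vlan, out_port):
    flow = {
        "switch": switch_dpid,          # datapath ID of the target switch
        "priority": 100,
        "match": {"vlan_id": match_vlan},
        "actions": [{"type": "OUTPUT", "port": out_port}],
    }
    req = urllib.request.Request(
        CONTROLLER,
        data=json.dumps(flow).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    # Packets that do not match an installed rule fall back to the
    # switch's local forwarding/routing pipeline in hybrid mode.
    for dpid in ["00:00:00:00:00:00:00:01", "00:00:00:00:00:00:00:02"]:
        print(dpid, push_flow(dpid, match_vlan=100, out_port=48))
```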

This allows customers to implement OpenFlow rules where they provide the most benefit without needing to move every switch completely to OpenFlow-only management. By processing non-OpenFlow data through its local management plane and leveraging the local forwarding pipeline, the hybrid switch increases network performance and efficiency, through faster processing of new flows as well as lower load on the controllers.

This is much more flexible than the other OpenFlow switch mode, OpenFlow-only. That mode does not allow the switch to have a local control plane, so each and every flow must be configured by the OpenFlow controller, which creates a heavy load on the controllers and results in higher latency and lower efficiency.

Open-Source Automation via Puppet

Further enhancing the openness of our switches and the standardization of configuration, Mellanox switches now integrate the Puppet™ automation software agent. Puppet provides an open-source-based standard interface for device configuration and management. Tasks such as software downloads, port configuration, and VLAN management can be handled automatically according to defined policies. Mellanox’s implementation of the Puppet agent is Netdev, a vendor-neutral network abstraction framework. Mellanox Netdev has been submitted to the DevOps community and can be downloaded for free.

Customers have the choice to manage our switches using a CLI, Web GUI, SNMP, XML, and now Puppet and OpenFlow. This allows the flexibility to design the easiest and most scalable management solution for each environment, and expands Mellanox’s commitment to open source.


Mellanox is involved in and contributes to other open-source projects, such as OpenStack, ONIE, and Quagga, and has already contributed several adapter applications to the open-source community. Mellanox is also a leading member of and contributor to the Open Compute Project, for which it provides NICs, switches, and software.


Process Affinity: Hop on the Bus, Gus!

This is an excerpt of a post published today on the Cisco HPC Networking blog by Joshua Ladd, Mellanox:

At some point in the process of pondering this blog post I noticed that my subconscious had, much to my annoyance, registered a snippet of the chorus to Paul Simon’s timeless classic “50 Ways to Leave Your Lover” with my brain’s internal progress thread. Seemingly, endlessly repeating, billions of times over (well, at least ten times over) the catchy hook that offers one, of presumably 50, possible ways to leave one’s lover – “Hop on the bus, Gus.” Assuming Gus does indeed wish to extricate himself from a passionate predicament, this seems a reasonable suggestion. But, supposing Gus has a really jilted lover; his response to Mr. Simon’s exhortation might be “Just how many hops to that damn bus, Paul?”

HPC practitioners may find themselves asking a similar question, though in a somewhat less contentious context (pun intended). Given the complexity of modern HPC systems with their increasingly stratified memory subsystems and myriad ways of interconnecting memory, networking, computing, and storage components such as NUMA nodes, computational accelerators, host channel adapters, NICs, VICs, JBODs, Target Channel Adapters, etc., reasoning about process placement has become a much more complex task with much larger performance implications between the “best” and the “worst” placement policies. To compound this complexity, the “best” and “worst” placement necessarily depends upon the specific application instance and its communication and I/O pattern. Indeed, an in-depth discussion of Open MPI’s sophisticated process affinity system is far beyond the scope of this humble blog post, and I refer the interested reader to the deep-dive talk Jeff Squyres (Cisco) gave at Euro MPI on this topic.

In this posting I’ll only consider the problem framed by Gus’ hypothetical query: how can one map MPI processes as close to an I/O device as possible, thereby minimizing data movement, or “hops,” through the intranode interconnect for those processes? This is a very reasonable request, but the ability to automate this process has remained mostly absent in modern HPC middleware. Fortunately, powerful tools such as hwloc are available to help us with just such a task. Hwloc usually manipulates processing units and memory, but it can also discover I/O devices and report their locality. In the simplest terms, this can be leveraged to place I/O-intensive applications on cores near the I/O devices they use. Whereas Gus probably didn’t have the luxury of choosing his locality so as to minimize the number of hops necessary to get on his bus, Open MPI, with the help of hwloc, now provides a mechanism for mapping MPI processes to the NUMA node “closest” to an I/O device.
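
The same locality information is also exposed directly by Linux, which makes for a useful sanity check outside of MPI. Below is a minimal Python sketch (not Open MPI’s actual affinity machinery, and the device name mlx4_0 is an assumption for illustration) that reads the NUMA node of an InfiniBand HCA from sysfs and pins the current process to that node’s cores:

```python
# Minimal sketch: bind the current process to the CPUs of the NUMA node
# closest to an InfiniBand HCA, using standard Linux sysfs attributes.
# The device name "mlx4_0" is an assumption; adjust for your system.
import os

def numa_node_of_ib_device(ibdev="mlx4_0"):
    # Each IB device exposes the NUMA node of its PCI parent in sysfs.
    with open(f"/sys/class/infiniband/{ibdev}/device/numa_node") as f:
        return int(f.read().strip())

def cpus_of_numa_node(node):
    # cpulist looks like "0-7,16-23"; expand it into a set of CPU ids.
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpus = set()
        for part in f.read().strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
        return cpus

if __name__ == "__main__":
    node = numa_node_of_ib_device("mlx4_0")
    if node >= 0:  # -1 means the kernel could not determine locality
        os.sched_setaffinity(0, cpus_of_numa_node(node))
    print("Pinned to CPUs:", sorted(os.sched_getaffinity(0)))
```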

Read the full text of the blog here.

Joshua Ladd is an Open MPI developer & HPC algorithms engineer at Mellanox Technologies.  His primary interests reside in algorithm design and development for extreme-scale high performance computing systems. Prior to joining Mellanox Technologies, Josh was a staff research scientist at the Oak Ridge National Lab where he was engaged in R&D on high-performance communication middleware.  Josh holds a B.S., M.S., and Ph.D. all in applied mathematics.

 

Virtual Modular Switch (VMS): A Network Evolution Story – Part 2

Distributed elements, in any sector, have basic benefits and drawbacks compared to a single large tool. It is similar to preferring small aircraft over a jumbo 747 for carrying passengers between nearby airfields, or a bus versus multiple private cars for moving a football team around.

 

In networking, the choice between a Virtual Modular Switch (VMS) and a modular switch is cost- and performance-driven. A network operator will prefer the solution that gets the job done at the lowest cost, and the analysis will produce different results depending on the cluster’s size. If the number of network ports required for the solution fits into a single chassis-based device, that chassis, even though it is equipped with redundant peripheral elements such as fans and power supplies, still represents a single point of failure in the network. To solve this, a second chassis is introduced to share the load and provide connectivity in case of a chassis failure.

 

From a financial point of view, if you have a 1,000-port chassis in full use, you need to deploy 2,000 ports for high availability, which is a 100% price increase. If only two thirds of the chassis ports are in use, that translates to a 200% increase over the truly required investment, and more such examples are easy to find. Another problem with the chassis is that it comes in very few form factors: if your solution requires 501 ports while the chassis of choice supports 500, you need to add another chassis and pay double.

 

Alternatively, breaking the solution into multiple devices in a VMS gives both improved granularity in terms of port count and high availability in terms of impact from failure. In loose terms, if the VMS consists of 20 switches, the failure of a single switch translates to only a 5% loss of network capacity. Regardless of how powerful and sophisticated the chassis is, this is a classic case where the strength of many tops the strength of one.
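
To put rough numbers on the trade-off described above, here is a small sketch using the port counts from the example; it assumes an identical per-port price for chassis and VMS hardware, which is a simplification for illustration only.

```python
# Rough cost/availability comparison from the example above. Assumes an
# identical per-port price for chassis and VMS hardware (illustration only).

def redundancy_overhead(ports_needed, ports_deployed):
    """Extra ports bought beyond what the workload needs, as a percentage."""
    return 100.0 * (ports_deployed - ports_needed) / ports_needed

# Chassis: 1,000 ports needed, a second 1,000-port chassis added for HA.
print(redundancy_overhead(1000, 2000))                 # 100% overhead
# Chassis only two-thirds utilized: ~667 ports needed, 2,000 deployed.
print(round(redundancy_overhead(2000 // 3, 2000)))     # ~200% overhead

# Availability: share of capacity lost when one element fails.
print(100 / 1)    # single chassis: one failure domain holds 100% of capacity
print(100 / 20)   # 20-switch VMS: a single switch failure costs 5%
```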

 


Continue reading

Mellanox Video Recap of Supercomputing Conference 2013 – Denver

We want to thank everyone for joining us at SC13 in Denver, Colorado last month. We hope you had a chance to become more familiar with our end-to-end interconnect solutions for HPC.

 

Check out the videos of the presentations given during the Mellanox Evening Event, held on November 20, 2013 at the Sheraton Denver Downtown Hotel. The event was keynoted by Eyal Waldman, President and CEO of Mellanox Technologies:

Continue reading

Symantec’s Clustered File Storage over InfiniBand

Last week (on December 9th, 2013), Symantec announced the general availability (GA) of its clustered file storage (CFS) solution. The new solution enables customers to access mission-critical data and applications 400% faster than traditional Storage Area Networks (SANs), at 60% of the cost.

Faster is cheaper! Sounds like magic! How are they doing it?

To understand the “magic,” it is important to understand the advantages that SSDs combined with a high-performance interconnect bring to modern scale-out (or clustered) storage systems. Until now, SAN-based storage has typically been used to increase performance and provide data availability for multiple applications and clustered systems. However, with the demands of recent high-performance applications, SAN vendors are trying to add SSDs into the storage array itself to provide higher bandwidth and lower-latency response.

Since SSDs offer an incredibly high number of IOPS and very high bandwidth, it is important to use the right interconnect technology and to avoid bottlenecks in the path to storage. Older fabrics such as Fibre Channel (FC) cannot cope with the demand for faster pipes, as 8Gb/s (or even 16Gb/s) of bandwidth is not enough to satisfy application requirements. While 40Gb/s Ethernet may look like an alternative, InfiniBand (IB) currently supports up to 56Gb/s, with a roadmap to 100Gb/s next year.
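
As a rough back-of-the-envelope illustration of why the fabric matters, the sketch below estimates how few SSDs it takes to saturate each link type; the per-SSD throughput figure and the approximate usable link rates are assumptions for illustration, not figures from the announcement.

```python
# Back-of-the-envelope sketch: how many SSDs saturate a single link?
# The 500 MB/s per-SSD sequential throughput is an assumed figure; real
# devices and workloads vary. Link rates are approximate usable MB/s.
SSD_MBPS = 500  # assumed sequential throughput per SSD, in MB/s

fabrics = {
    "8G Fibre Channel": 800,
    "16G Fibre Channel": 1600,
    "40GbE": 5000,
    "56Gb/s FDR InfiniBand": 6800,
}

for name, mbps in fabrics.items():
    print(f"{name}: ~{mbps / SSD_MBPS:.1f} SSDs fill the pipe")
```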

Continue reading

CloudNFV Proof-of-Concept Approved by ETSI ISG

Mellanox is a CloudNFV integration partner, providing ConnectX-3 and ConnectX-3 Pro 10/40GbE NICs on Dell servers.

“The CloudNFV team will be starting PoC execution in mid-January, reporting on our results at the beginning of February, and contributing four major documents to the ISG’s process through the first half of 2014,” said Tom Nolle, President of CIMI Corporation and Chief Architect of CloudNFV, in his recent related blog post. Telefonica and Sprint have agreed to sponsor the CloudNFV PoC.

We’re already planning additional PoCs, some focusing on specific areas and developed by our members and some advancing the boundaries of NFV into the public and private cloud and into the world of pan-provider services and global telecommunications.

Mellanox server and storage interconnects enable telecom data-plane virtual network functions to run with near-bare-metal server performance in OpenStack cloud environments, through integration with NFV orchestration and SDN platforms.

Read more:   The CloudNFV Proof-of-Concept Was Approved by the ETSI ISG!

Author: As Director of Business Development at Mellanox, Eran Bello handles business, solution and product development, and strategy for the growing telecom and security markets. Prior to joining Mellanox, Eran was Director of Sales and Business Development at Anobit Technologies, where he was responsible for developing the ecosystem for Anobit’s new enterprise SSD business, as well as portfolio introduction and business engagements with key server OEMs, storage solution providers, and mega datacenters. Earlier, Eran was VP of Marketing and Sales for North and Central America at Runcom Technologies, the first company to deliver an end-to-end Mobile WiMAX/4G solution, and was a member of the WiMAX/4G Forum.

The Train Has Left the Station, Open Ethernet is Happening

Authored by:   Amit Katz – Sr. Director, Product Management

Customers are tired of paying huge sums of money for Ethernet switches for no good reason. At some point, OpenFlow seemed like the way to change the networking world, but due to various factors, such as overlay networks, changing market interests, and other unforeseen developments, it is hard to view OpenFlow today as a game-changer. While it remains a very important technology and provides a valuable means of implementing certain functionality, it has not created a revolution in the networking industry.

 

The real revolution occurring today is based on a combination of the momentum gained by the Open Compute Project and the increasing number of switch software and hardware suppliers. Initiatives to open the switch, such as Mellanox’s Open Ethernet announced earlier this year, have placed us on the right path to bringing networking to where servers are today: affordable, open, and software-defined.

 

But is this revolution all about saving on cost? Not at all: cost is important, but flexibility, openness, and the freedom to choose are equally important. One of the key elements enabling vendor selection flexibility is the Open Network Install Environment (ONIE), which decouples the switch hardware from its software, enabling vendors to provide something very similar to what we see in the server world: hardware without an operating system. That means customers can buy a switch with many ports and install their choice of OS on top of it. If the customer later wants to change the OS, the lion’s share of the investment (the hardware piece) is protected.

Continue reading

Mellanox Congratulates Yarden Gerbi

Mellanox congratulates Yarden Gerbi on winning the gold medal in the recent Israeli judo competition. Gerbi is the 2013 Judo World Champion in the under-63 kg (139 lbs.) category and is ranked first worldwide. Mellanox will sponsor her throughout her training toward the 2016 Rio Olympic Games as she attempts to qualify for and compete in the Olympic Games in Rio de Janeiro, Brazil.

 

Photo Credit: Oron Kochman

Continue reading